Quantifying the probability of existential catastrophe: A reply to Beard et al.

Seth Baum of GCRI has published an excellent new paper. Here’s the abstract:

A recent article by Beard, Rowe, and Fox (BRF) evaluates ten methodologies for quantifying the probability of existential catastrophe. This article builds on BRF’s valuable contribution. First, this article describes the conceptual and mathematical relationship between the probability of existential catastrophe and the severity of events that could result in existential catastrophe. It discusses complications in this relationship arising from catastrophes occurring at different speeds and from multiple concurrent catastrophes. Second, this article revisits the ten BRF methodologies, finding an inverse relationship between a methodology’s ease of use and the quality of results it produces—in other words, achieving a higher quality of analysis will in general require a larger investment in analysis. Third, the manuscript discusses the role of probability quantification in the management of existential risks, describing why the probability is only sometimes needed for decision-making and arguing that analyses should support real-world risk management decisions and not just be academic exercises. If the findings of this article are taken into account, together with BRF’s evaluations of specific methodologies, then risk analyses of existential catastrophe may tend to be more successful at understanding and reducing the risks.

(He also wrote a blog post about the paper.)

I’d highly recommend reading the whole paper. I’d also recommend the Beard et al. paper; I found it very useful when constructing a database of existential risk estimates, as well as when thinking about the pros and cons of doing so. (Unfortunately, Beard et al. is behind a paywall. But I think that this freely available working paper is essentially a draft of that paper, though I haven’t read it to check.)

Existential risk is a function of probability of occurrence and probability of sufficient severity

I want to highlight and briefly comment on one part of the paper in particular:

Quantifying the probability of specific existential catastrophe events (such as a nuclear war or Earth-asteroid collision) requires additional attention to severity [of those events]. The probability can be decomposed into two constituent parts as follows:

$$P_{EC} = P_1 \times P_2 \quad (1)$$

In Eq. (1) [the above equation], P_EC is the probability of existential catastrophe from some event; P_1 is the probability of the initial catastrophe event; and P_2 is the probability that the event will result in a harm greater or equal to the collapse of civilization [Baum defers in this paper to Beard et al.’s nonstandard usage of the term “existential risk”; see also]. For example, P_1 could represent the probability of nuclear war and P_2 could represent the probability that nuclear war would result in the collapse of civilization or worse. The occurrence of the initial catastrophe event does not necessarily entail the collapse of civilization—that depends on how effectively the survivors can cope with the aftermath of the event.

Calculating P_EC via Eq. (1) requires two distinct analyses: one for each of P_1 and P_2. Analysis of P_1 is the analysis of the probability of initial events, and can follow many conventions of probabilistic risk analysis. In contrast, quantifying P_2 requires analysis of the severity, with attention to the success of catastrophe survivors. This is a rather different sort of analysis than is needed to quantify the probability of initial catastrophe events represented by P_1. However, P_2 is not equivalent to severity. P_2 is a probability variable representing the probability that the severity will exceed a certain threshold. P_2 can be obtained by creating a probability distribution for the severity of an initial event and then calculating the portion of that distribution that exceeds the threshold for existential catastrophe:

$$P_2 = P(S \geq S_T) \quad (2)$$

In Eq. (2), P_2 is as in Eq. (1); S is severity of some initial event; and S_T is the minimum severity threshold of existential catastrophe (the collapse of civilization in BRF [again, this is nonstandard usage of the term “existential catastrophe”]). Eq. (2) is illustrated in Fig. 1.

[Fig. 1, not reproduced here, shows a probability distribution over the severity S with a vertical line at the threshold S_T; P_2 is the portion of the distribution to the right of that line.]

I think this is a very useful way to break down and think about the likelihoods of various existential risks. (Though there are also of course other useful ways to do so.)
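
To make Eqs. (1) and (2) concrete, here’s a minimal sketch in Python. The numbers and the choice of a beta distribution for severity are made up purely for illustration (they are not estimates of anything), and severity is crudely proxied by the fraction of the population killed:

```python
from scipy import stats

# Illustrative numbers only -- not actual risk estimates.
p1 = 0.01  # P_1: probability of the initial catastrophe event (e.g. nuclear war) per period

# Severity S of the event, conditional on it occurring, crudely proxied here by the
# fraction of the population killed and modelled with an arbitrary beta distribution.
severity = stats.beta(a=1.5, b=4)

# S_T: an assumed (point-estimate) severity threshold for "existential catastrophe"
# (the collapse of civilization, in BRF's usage).
s_threshold = 0.9

# Eq. (2): P_2 = P(S >= S_T), the portion of the severity distribution beyond the threshold.
p2 = severity.sf(s_threshold)

# Eq. (1): P_EC = P_1 * P_2
p_ec = p1 * p2
print(f"P_1 = {p1}, P_2 = {p2:.2e}, P_EC = {p_ec:.2e}")
```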

But what severity level is sufficient?

However, I think the above paragraphs fail to make one important point explicit: we’re uncertain about where the minimum threshold is in the first place, not just about how severe an event would be (if it occurred). Both our uncertainty about the threshold and our uncertainty about the severity of the event contribute to our uncertainty about the likelihood that the event would cause an existential catastrophe (if it occurred).

For example, with nuclear war, we’re uncertain about how severe the “short-term” consequences would be (e.g., how many states would collapse, and how many people would die?), and about what severity of consequences would be sufficient for unrecoverable civilizational collapse (e.g., would the collapse of all states and the death of 99% of the population be “enough”?).

We could adapt the above diagram to represent this by also showing a probability distribution over possible thresholds, rather than a single vertical line (which is effectively a point estimate about what the threshold is).
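
For instance, a rough, purely illustrative sketch of such an adapted figure (both distributions below are arbitrary choices, not estimates) might overlay a distribution over possible thresholds on the severity distribution, in place of the single vertical line:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

s = np.linspace(0, 1, 500)  # severity, crudely proxied by fraction of the population killed

# Both distributions are arbitrary choices for illustration, not estimates.
severity_pdf = stats.beta(a=1.5, b=4).pdf(s)    # severity of the event, given it occurs
threshold_pdf = stats.beta(a=12, b=3).pdf(s)    # where the collapse threshold might lie

fig, ax = plt.subplots()
ax.plot(s, severity_pdf, label="severity of event (given occurrence)")
ax.plot(s, threshold_pdf, label="distribution over possible thresholds")
ax.axvline(0.9, linestyle="--", label="point estimate of threshold (single vertical line)")
ax.set_xlabel("severity")
ax.set_ylabel("probability density")
ax.legend()
plt.show()
```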

I think Baum would agree with these points, given his paper Uncertain Human Consequences in Asteroid Risk Analysis and the Global Catastrophe Threshold. And these points don’t contradict his statements in this new paper, partly because we could arguably just fold our uncertainty about the threshold into our uncertainty about how likely it is that the event (if it occurs) would exceed the threshold.

But it seems to me conceptually useful to explicitly think of the probability of an initial event exceeding the relevant threshold as being determined by both:

  • a probability distribution for the severity of the event (conditional on its occurrence)

  • and another probability distribution for where the relevant threshold is

I’d also guess that this would sometimes be useful when actually trying to quantify risk levels, but I’m less sure about that.
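
As a rough sketch of how that might look in practice (again with made-up numbers and distributions, chosen only for illustration), one could estimate P_2 by Monte Carlo, sampling both the severity of the event and the location of the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Severity of the event (conditional on occurrence), proxied by fraction of the
# population killed; arbitrary illustrative distribution.
severity = rng.beta(1.5, 4, n)

# Uncertainty over where the threshold for unrecoverable collapse lies,
# rather than a single point estimate.
threshold = rng.beta(12, 3, n)

# P_2 under both kinds of uncertainty: probability that severity meets or exceeds
# the (uncertain) threshold.
p2 = np.mean(severity >= threshold)

p1 = 0.01          # illustrative probability of the initial event
p_ec = p1 * p2     # Eq. (1)
print(f"P_2 ≈ {p2:.2e}, P_EC ≈ {p_ec:.2e}")
```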