Catastrophic rectangles—visualising catastrophic risks

Risks are bad. You probably noticed this if you’ve ever lost your wallet or gone through a pandemic. However, wallet-loss risk and pandemic risk are not equally worrying.

To assess how bad a risk is, two dimensions matter: the probability of the bad thing happening, and how bad it is if it happens. The product of these two dimensions is the expected badness of the risk.
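In symbols, writing $p$ for the probability of the event and $b$ for how bad it would be if it happened:

$$\text{expected badness} = p \times b$$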

Our intuitions are pretty bad at making comparisons of expected badness. To make things more intuitive, I introduce a visual representation of expected badness: catastrophic rectangles.

Here are two catastrophic rectangles with the same area, meaning they represent two risks with the same expected badness.

(Throughout this post, axes are intentionally left without scales, because no quantitative claims are made.)

Some people here want to reduce the badness of things. Reducing the expected badness of a risk means making its rectangle smaller. We can do this by reducing the probability of the event, or by reducing how bad it would be if it happened. The first approach is called prevention; the second, mitigation.

[Figure: Prevention shrinks the rectangle's width]
[Figure: Mitigation shrinks its height]
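For readers who prefer code to pictures, here is a minimal sketch of the rectangle model in Python. The class and function names are mine, invented for illustration:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Risk:
    probability: float  # width of the rectangle
    badness: float      # height of the rectangle

    @property
    def expected_badness(self) -> float:
        # The area of the rectangle.
        return self.probability * self.badness

def prevent(risk: Risk, factor: float) -> Risk:
    """Prevention: shrink the width (probability) by `factor`."""
    return replace(risk, probability=risk.probability * (1 - factor))

def mitigate(risk: Risk, factor: float) -> Risk:
    """Mitigation: shrink the height (badness) by `factor`."""
    return replace(risk, badness=risk.badness * (1 - factor))
```

Either operation shrinks the area, which is all that reducing expected badness means in this frame.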

Some events are catastrophic. This is typically the case when many people die at the same time. Some catastrophic events are even existential: they would permanently destroy human civilisation. Which is really bad. To account for this extra badness, we have to give existential rectangles an existential badness bonus.

We can be pretty sure that the loss of your wallet will not destroy human civilisation. For some other risks, it is less clear. A climate, artificial intelligence, pandemic or nuclear catastrophe could be existential, or not. These risks can be decomposed into existential and non-existential rectangles. Here is an example for pandemics.
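In the code sketch from above, this decomposition is just a set of sub-rectangles whose areas add up. Every number below is invented purely for illustration:

```python
# Uses the Risk class from the sketch above. All numbers are made up.
pandemic = {
    "non-existential": Risk(probability=0.10, badness=20.0),
    # This badness includes the existential badness bonus.
    "existential": Risk(probability=0.001, badness=10_000.0),
}

total_expected_badness = sum(r.expected_badness for r in pandemic.values())
```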

The goal is still to reduce the total area of the rectangles, and the prevention approach still works. For example, we can have better peace treaties to reduce the probability of a nuclear war.

[Figure: Improving prevention]

As for mitigation, there is something different about existential rectangles: their height cannot be reduced. This is because, by definition, an existential risk is an event that permanently destroys civilisation. We can't make the end of humanity feel much better. What we can do is try to avoid it. To do this, we can either improve our response to catastrophes, or improve our resilience.

Improving our response means preventing catastrophes from getting too big. For example, we can have better pandemic emergency plans. Improving our response reduces the badness of the non-existential rectangle, while also reducing the probability of catastrophes that destroy civilisation.

Actually, this is not quite what happens when we improve our response. By limiting the badness of existential catastrophes, we turn them into non-existential ones, so we have to widen the non-existential rectangle a bit. What's more, the catastrophes we convert to non-existential are the worst ones, which raises the average badness of the non-existential rectangle. Let's reduce its height slightly less to take this into account.

[Figure: Improving response]
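One hedged way to write down that bookkeeping, reusing the toy Risk class from above (the function name and parameters are mine): probability mass moves from the existential rectangle into the non-existential one at above-average badness, and only then is the height cut applied.

```python
def improve_response(non_ex: Risk, ex: Risk,
                     converted_p: float, converted_badness: float,
                     height_cut: float) -> tuple[Risk, Risk]:
    """Convert `converted_p` of existential probability into non-existential
    catastrophes of badness `converted_badness`, then cut the
    non-existential height by `height_cut`."""
    new_p = non_ex.probability + converted_p
    # Probability-weighted average height: the converted catastrophes are
    # the worst ones, so this average is higher than non_ex.badness was...
    avg_badness = (non_ex.probability * non_ex.badness
                   + converted_p * converted_badness) / new_p
    # ...which is why the height ends up reduced slightly less than
    # height_cut alone would suggest.
    return (Risk(new_p, avg_badness * (1 - height_cut)),
            Risk(ex.probability - converted_p, ex.badness))
```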

We can also choose to focus specifically on the scenario where civilisation is destroyed. For example, some people have built a seed vault in Svalbard, Norway, to help us grow crops again in case everything has been destroyed by some global catastrophe. Interventions of this sort try to improve our resilience, that is, to give civilisation more chances to recover. A resilience intervention converts some existential risk into non-existential catastrophic risk.

[Figure: Improving resilience]
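In the sketch above, resilience is then the pure-conversion case: probability mass leaves the existential rectangle with no height cut at all. Again, every number is invented:

```python
# Resilience: convert some existential risk into (still very bad)
# non-existential catastrophic risk, without reducing any badness directly.
non_ex, ex = improve_response(pandemic["non-existential"],
                              pandemic["existential"],
                              converted_p=0.0002,
                              converted_badness=500.0,
                              height_cut=0.0)
```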

Sometimes risks can have different origins. For example, the next pandemic could be natural, or it could be engineered. We could use more catastrophic rectangles to capture this distinction.

Because the engineered rectangles look scarier than the natural ones, it's easy to think that we should always focus on engineered pandemics. In reality, we should only focus on engineered pandemics when doing so allows for a greater reduction of the total area of the rectangles. An intervention that only reduces risks from engineered pandemics is not as good as one that has the same impact on engineered-pandemic risks but additionally reduces other risks.

[Figure: Good]
[Figure: Even better]
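A worked example, with numbers invented purely for illustration, makes the comparison concrete:

```python
# Made-up numbers for illustration only.
engineered = Risk(probability=0.02, badness=100.0)  # area 2.0
natural = Risk(probability=0.05, badness=30.0)      # area 1.5

# "Good": halves engineered-pandemic risk only.
area_saved_good = engineered.expected_badness * 0.5                # 1.0

# "Even better": same effect on engineered pandemics, plus a 20% cut
# to natural-pandemic risk.
area_saved_better = (engineered.expected_badness * 0.5
                     + natural.expected_badness * 0.2)             # 1.3
```

Both interventions look identical if we stare only at the scarier engineered rectangles; the second is better because it removes more total area.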

The same idea applies when we consider several risks at the same time. The Svalbard seed vault could be useful in any situation where all crops have been destroyed. It is thus better than an intervention that would reduce existential risk by the same factor, but only for one specific catastrophic risk.

[Figure: Svalbard]
[Figure: Not as good as Svalbard]

All these rectangles are catastrophically simplistic. Many things here are highly debatable, and many rectangular questions remain open. For example:

  • How big do you think the existential badness bonus should be in this probability-badness frame?

  • What would s-risks look like in this frame?

  • What would the rectangles become if every individual risk were represented in this frame with a high level of detail? What would be the shape of the curve?

  • How would you represent uncertainty in a nice way? How could a confidence region be drawn here?

  • Can this probability-badness frame be useful when used more quantitatively (with numbers on the axes)?

  • ...

I hope this way of visualising risks can help us better communicate and discuss ideas about global catastrophic risks. If you want to make your own catastrophic rectangles, feel free to use this Canva template!



I wrote this post during the 2021 Summer Research Program of the Swiss Existential Risk Initiative (CHERI). Thanks to the CHERI organisers for their support, to my CHERI mentor Florian Habermacher, and to the other summer researchers for their helpful comments and feedback. I am especially grateful to Silvana Hultsch, who made me read the articles that gave me the idea of writing this. Views and mistakes are my own.

References

  • The idea of prevention, response and resilience is from Cotton-Barratt et al. (2020). It is also mentioned in The Precipice (Ord, 2020).

  • The existential badness bonus refers to Parfit’s two wars thought experiment, cited by Bostrom (2013), and also mentioned in The Precipice.

  • The idea that there could be a tendency to focus too much on specific catastrophic scenarios refers to what Yudkowsky (2008) says about the conjunction fallacy.

  • The idea of considering the effect of interventions across multiple existential risks refers to the idea of integrated assessment of global risks (Baum and Barrett, 2018).

Baum, Seth, and Anthony Barrett. “Towards an Integrated Assessment of Global Catastrophic Risk.” SSRN Scholarly Paper. Rochester, NY: Social Science Research Network, January 17, 2018. https://papers.ssrn.com/abstract=3046816.

Bostrom, Nick. “Existential Risk Prevention as Global Priority.” Global Policy 4, no. 1 (February 2013): 15–31. https://doi.org/10.1111/1758-5899.12002.

Cotton-Barratt, Owen, Max Daniel, and Anders Sandberg. “Defence in Depth Against Human Extinction: Prevention, Response, Resilience, and Why They All Matter.” Global Policy 11, no. 3 (May 2020): 271–82. https://doi.org/10.1111/1758-5899.12786.

Ord, Toby. The Precipice: Existential Risk and the Future of Humanity. London: Bloomsbury Publishing, 2020.

Yudkowsky, Eliezer. “Cognitive Biases Potentially Affecting Judgment of Global Risks.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 91–119. New York: Oxford University Press, 2008.