Venn diagrams of existential, global, and suffering catastrophes
Some terms and concepts that are important to many longtermists are existential risk, extinction risk, global catastrophic risk, and suffering risk. Also important are the terms and concepts for the four types of catastrophes corresponding to those four types of risks.[1]
Unfortunately, people sometimes mistakenly use one of these terms as if it were synonymous with another, or mistakenly present one concept as entirely a subset of another. Other times, people discuss these concepts in complete isolation from each other, which is not a mistake, but may sometimes be a missed opportunity.
In reality, there are important distinctions and important overlaps between these terms and concepts. So this post reviews definitions and descriptions of those four types of catastrophes, discusses how they relate to each other, and presents three Venn diagrams (which include example scenarios) to summarise these points. I hope this can help increase the conceptual clarity of discussions and thinking regarding longtermism and/or large-scale risks.
This post primarily summarises and analyses existing ideas, rather than presenting novel ideas.
Existential catastrophe, extinction, and failed continuation
In The Precipice, Toby Ord defines an existential catastrophe as "the destruction of humanity's longterm potential."[2] Extinction is one way humanity's long-term potential could be destroyed,[3] but not the only way. Ord gives the following breakdown of different types of existential catastrophe: extinction, and failed continuation (i.e., unrecoverable collapse or unrecoverable dystopia).
(For those interested, I've previously collected links to works on collapse and on dystopia.)
We could thus represent the relationship between the concepts of existential catastrophe, extinction, and failed continuation via the following Venn diagram:[4]
I've included in that Venn diagram some example scenarios. Note that:
- All of the example scenarios used in this post really are just examples; there are many other scenarios that could fit within each category, and some may be more likely than the scenarios I've shown.
- Whether a scenario really counts as an existential catastrophe depends on one's moral theory or values (or the "correct" moral theory or values), because that affects what counts as fulfilling or destroying humanity's long-term potential.
- The sizes of the sections of each Venn diagram are not meant to reflect relative likelihoods; e.g., I don't mean to imply that extinction and failed continuation are exactly as likely as each other.
Global and existential catastrophes
Multiple definitions of global catastrophic risks have been proposed, some of which differ substantially. I'll use the loose definition provided by Bostrom & Ćirković (2008, pp. 1-2) in the book Global Catastrophic Risks:
The term "global catastrophic risk" lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]
Given this definition, many existential catastrophes would also be global catastrophes. This includes all potential extinction events, many or all potential unrecoverable collapse events, and many potential transitions to unrecoverable dystopia. However, the terms "existential catastrophe" and "global catastrophe" are not synonymous, for two reasons.
Firstly, a wide array of global catastrophes would not be existential catastrophes. Indeed, "global catastrophe" is a notably "lower bar", and so global catastrophes may be much more likely than existential catastrophes. (See also Database of existential risk estimates.)
Secondly, some existential catastrophes wouldn't be global catastrophes (given Bostrom & Ćirković's definition), because they wouldn't involve any sudden spike in deaths or economic damage. This applies most clearly to "desired dystopias", in which a large portion of the people at the time actually favour the outcomes that occur (e.g., due to sharing a deeply flawed ideology). A desired dystopia may therefore not be recognised as a catastrophe by anyone who experiences it.[5] Ord's (2020) "plausible examples" of a desired dystopia include:
worlds that forever fail to recognise some key form of harm or injustice (and thus perpetuate it blindly), worlds that lock in a single fundamentalist religion, and worlds where we deliberately replace ourselves with something that we didn't realise was much less valuable (such as machines incapable of feeling).
It's also possible that some transitions to undesired or enforced dystopias (e.g., totalitarian regimes) could occur with no (or little) bloodshed or economic damage.[6]
In contrast, unrecoverable collapse seems likely to involve a large number of deaths. It also seems, almost by definition, to involve major economic damage. And extinction would of course involve a huge amount of death and economic damage.
We could thus represent the relationship between the concepts of global catastrophe, existential catastrophe, extinction, and failed continuation via the following Venn diagram:
Suffering, existential, and global catastrophes
Suffering risks (also known as risks of astronomical suffering, or s-risks) are typically defined as "risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far" (Daniel, 2017).[7] (Update in 2022: That source has been updated to give the following definition instead: "S-risks are risks of events that bring about suffering in cosmically significant amounts. By 'significant', we mean significant relative to expected future suffering." This definition does seem better to me, for the reasons that Lukas Gloor mentions in a comment.)
I'll use the term suffering catastrophe to describe the realisation of an s-risk; i.e., an event or process involving suffering on an astronomical scale.[8]
Two mistakes people sometimes make are discussing s-risks as if they're entirely distinct from existential risks, or discussing s-risks as if they're a subset of existential risks. In reality:
- There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.
  - Most obviously, many unrecoverable dystopia scenarios would involve suffering of humans or nonhumans on an astronomical scale. For example, a stable, global totalitarian regime could cause many billions of humans to suffer in slavery-like conditions each generation, for many generations (see also Reducing long-term risks from malevolent actors). Or a "desired dystopia" might involve huge numbers of suffering wild animals (via terraforming) or suffering digital minds, because humanity fails to realise that that's problematic.
  - As another example, some AI-induced extinction events may be followed by large amounts of suffering, such as if the AI then runs huge numbers of highly detailed simulations including sentient beings (see e.g. Tomasik).[9]
- But there could also be suffering catastrophes that aren't existential catastrophes, because they don't involve the destruction of (the vast majority of) humanity's long-term potential.
  - This depends on one's moral theory or values (or the "correct" moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity's long-term potential.
  - For example, the Center on Long-Term Risk notes: "Depending on how you understand the [idea of loss of 'potential' in definitions] of [existential risks], there actually may be s-risks which aren't [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness."
  - In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.
Likewise:
- There are substantial overlaps between suffering catastrophes and global catastrophes, because some suffering catastrophes (or the events that cause them) would involve major human fatalities and economic damage.
- But there could also be suffering catastrophes that aren't global catastrophes, because they don't involve major human fatalities and economic damage.
  - In particular, there could be suffering among nonhumans and/or ongoing human suffering without a spike in human fatalities at any particular time.
We could thus represent the relationship between the concepts of suffering catastrophe, existential catastrophe, and global catastrophe via the following Venn diagram (although this diagram might be inaccurate, given a sufficiently suffering-focused theory of ethics):
If you found this post interesting, you may also be interested in my previous posts Clarifying existential risks and existential catastrophes and 3 suggestions about jargon in EA.
I'm grateful to Tobias Baumann, Anthony D, and David Kristoffersson for helpful feedback. I'm also grateful to David for discussions that may have informed the ideas I expressed in this post. This does not imply these people's endorsement of all of this post's claims.
I wrote this post in my personal time, and it doesn't necessarily represent the views of any of my employers.
[1] I'm making a distinction between "risk" and "catastrophe", in which the former refers to the chance of a bad event happening and the latter refers to the bad event itself.
[2] Ord writes: "I am understanding humanity's longterm potential in terms of the set of all possible futures that remain open to us. This is an expansive idea of possibility, including everything that humanity could eventually achieve, even if we have yet to invent the means of achieving it."
[3] For the purposes of this post, "extinction" refers to the premature extinction of humanity, and excludes scenarios such as:
- humanity being "replaced" by a "descendant" which we'd be happy to be replaced by (e.g., whole brain emulations or a slightly different species that we evolve into)
- "humanity (or its descendants) [going] extinct after fulfilling our longterm potential" (Ord, 2020)
Those sorts of scenarios are excluded because they might not count as existential catastrophes.
[4] Alternative Venn diagrams one could create, and which could also be useful, include a diagram in which failed continuation is broken down into its two major types, and a diagram based on Bostrom's typology of existential catastrophes.
[5] However, certain other definitions of "global catastrophe" might capture all existential catastrophes as a subset. For example, Yassif writes: "By our working definition, a GCR is something that could permanently alter the trajectory of human civilization in a way that would undermine its long-term potential or, in the most extreme case, threaten its survival." Taken literally, that definition could include events that would involve no obvious or immediate deaths and economic damage, and that no one at the time recognises as a catastrophe.
[6] Seemingly relevantly, Bostrom's classification of types of existential risk (by which I think he really means "types of existential catastrophe") includes "plateauing - progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity", as well as "unconsummated realization". Both of those types seem like they could occur in ways such that the catastrophe doesn't rapidly and directly lead to large amounts of death and economic damage.
It's also possible that an existential catastrophe which doesn't rapidly and directly lead to large amounts of death and economic damage could still lead to that eventually and indirectly. For example, the catastrophe might cause a failure to develop some valuable technology that would've been developed otherwise, and that would've saved lives or boosted the economy. If that happens, I personally wouldn't say that that alone should make the existential catastrophe also count as a global catastrophe. But I also think that's a debatable and relatively unimportant point.
[7] This definition of course leaves some room for interpretation. See the rest of Daniel's post for details. Also note that, in December 2019, the Center on Long-Term Risk (CLR) repeated this definition but added that "This definition may be updated in the near future." (CLR is arguably the main organisation focused on s-risks, and is the organisation Daniel was part of when he wrote the above-linked post.)
[8] I haven't actually seen the term "suffering catastrophe" used elsewhere, and sometimes the term "s-risk" is itself used to refer to the event, rather than the risk of the event occurring (e.g., here). But it seems to me preferable to have a separate term for the risk and the event. And "suffering catastrophe" seems to make sense as a term, by analogy to the relationship between (a) the terms "existential risk" and "existential catastrophe", and (b) the terms "global catastrophic risk" and "global catastrophe".
[9] Also, more speculatively, if there are (or will be) vast numbers of suffering sentient beings elsewhere in the universe, and humans currently have the potential to stop or substantially reduce that suffering, then human extinction or unrecoverable collapse could also represent a suffering catastrophe. See also The expected value of extinction risk reduction is positive.
Comments

Why is "people decide to lock in vast nonhuman suffering" an example of failed continuation in the last diagram?
Failed continuation is where humanity doesn't go extinct, but (in Ord's phrase) "the destruction of humanity's longterm potential" still occurs in some other way (and thus there's still an existential catastrophe).
And "destruction of humanity's longterm potential" in turn essentially means "preventing the possibility of humanity ever bringing into existence something close to the best possible future". (Thus, existential risks are not just about humanity.)
It's conceivable that vast nonhuman suffering could be a feature of even the best possible future, partly because both "vast" and "suffering" are vague terms. But I mean something like astronomical amounts of suffering among moral patients. (I hadn't really noticed that the phrase I used in the diagram didn't actually make that clear.) And it seems to me quite likely that a future containing that is not close to the best possible future.
Thus, it seems to me likely that locking in such a feature of the future is tantamount to preventing us from ever achieving something close to the best possible future.
Does that address your question? (Which is a fair question, in part because it turns out my language wasn't especially precise.)
ETA: I'm also imagining that this scenario does not involve (premature) human extinction, which is another thing I hadn't made explicit.
Edit: I just noticed that this post I'm commenting on is 2 years old (it came up in my feed and I thought it was new). So, the post wasn't outdated at the time!
That definition is outdated (at least with respect to how CLR thinks about it). The newer definition is the first sentence in the source you link to (it's a commentary by CLR on the 2017 talk): "S-risks are risks of events that bring about suffering in cosmically significant amounts."
Reasons for the change: (1) Calling the future scenario "galaxy-wide utopia where people still suffer headaches every now and then" an "s-risk" may come with the connotation (always unintended) that this entire future scenario ought to be prevented. Over the years, my former colleagues at CLR and I received a lot of feedback (e.g., here and here) that this aspect of the older definition was off-putting.
(2) Calling something an "s-risk" when it doesn't constitute a plausible practical priority even for strongly suffering-focused longtermists may generate the impression that s-risks are generally unimportant. The new definition means that s-risks, as they are defined,* are unlikely to be a rounding error for most longtermist views (except maybe if your normative views imply a 1:1 exchange rate between utopia and dystopia).
(*S-risks may still turn out to be negligible in practice for longtermist views that aren't strongly focused on reducing suffering, if particularly bad futures are really unlikely empirically or if we can't find promising interventions. [Edit: FWIW, I think there are tractable interventions, and s-risks don't seem crazy unlikely to me.])
Thanks for flagging this! I've now updated my post to include this new definition (I still use the old one first, but have added an explicit update in the main text).
This definition does seem better to me, for the reasons you mention.
One further ambiguity that IMO would be worth resolving, if you ever come to edit this, is between "unrecoverable collapse" and "collapse that in practice we don't recover from". The former sounds much more specific (e.g., a Mad Max-y scenario where we render so much surface area permanently uninhabitable by humans that we'd never again be able to develop a global economy), and so much lower probability.
I'm honestly wondering if we should deliberately reject all the existing terminology and try to start again, since a) as you say, many organisations use these terms inconsistently with each other, and b) the terms aren't etymologically intuitive. That is, "existential catastrophes" needn't either threaten existence or seem catastrophic, and "global" catastrophes needn't affect the whole globe, or only the one globe.
Also, it would be useful to have a term that covered the union of any two of the three circles, especially "global catastrophe" + "existential catastrophe", but you might need multiple terms to account for the ambiguity/uncertainty.