Venn diagrams of existential, global, and suffering catastrophes

Some terms and concepts that are important to many longtermists are existential risk, extinction risk, global catastrophic risk, and suffering risk. Also important are the terms and concepts for the four types of catastrophes corresponding to those four types of risks.[1]

Unfortunately, people sometimes mistakenly use one of these terms as if it were synonymous with another, or mistakenly present one concept as entirely fitting as a subset of another. Other times, people discuss these concepts in complete isolation from each other, which is not a mistake, but may sometimes be a missed opportunity.

In reality, there are important distinctions and important overlaps between these terms and concepts. So this post reviews definitions and descriptions of those four types of catastrophes, discusses how they relate to each other, and presents three Venn diagrams (which include example scenarios) to summarise these points. I hope this can help increase the conceptual clarity of discussions and thinking regarding longtermism and/or large-scale risks.

This post primarily summarises and analyses existing ideas, rather than presenting novel ideas.

Existential catastrophe, extinction, and failed continuation

In The Precipice, Toby Ord defines an existential catastrophe as “the destruction of humanity’s longterm potential.”[2] Extinction is one way humanity’s long-term potential could be destroyed,[3] but not the only way. Ord gives the following breakdown of different types of existential catastrophe:

(For those interested, I’ve previously collected links to works on collapse and on dystopia.)

We could thus represent the relationship between the concepts of existential catastrophe, extinction, and failed continuation via the following Venn diagram:[4]

I’ve included some example scenarios in that Venn diagram. Note that:

  • All of the example scenarios used in this post really are just examples; there are many other scenarios that could fit within each category, and some may be more likely than the scenarios I’ve shown.

  • Whether a scenario really counts as an existential catastrophe depends on one’s moral theory or values (or the “correct” moral theory or values), because that affects what counts as fulfilling or destroying humanity’s long-term potential.

  • The sizes of each section of each Venn diagram are not meant to reflect relative likelihoods. E.g., I don’t mean to imply that extinction and failed continuation are exactly as likely as each other.

Global and existential catastrophes

Multiple definitions of global catastrophic risks have been proposed, some of which differ substantially. I’ll use the loose definition provided by Bostrom & Ćirković (2008, p.1-2) in the book Global Catastrophic Risks:

The term ‘global catastrophic risk’ lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.

[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]

Given this definition, many existential catastrophes would also be global catastrophes. This includes all potential extinction events, many or all potential unrecoverable collapse events, and many potential transitions to unrecoverable dystopia. However, the terms “existential catastrophe” and “global catastrophe” are not synonymous, for two reasons.

Firstly, a wide array of global catastrophes would not be existential catastrophes. Indeed, “global catastrophe” is a notably “lower bar”, and so global catastrophes may be much more likely than existential catastrophes. (See also Database of existential risk estimates.)

Secondly, some existential catastrophes wouldn’t be global catastrophes (given Bostrom & Ćirković’s definition), because they wouldn’t involve any sudden spike in deaths or economic damage. This applies most clearly to “desired dystopias”, in which a large portion of the people at the time actually favour the outcomes that occur (e.g., due to sharing a deeply flawed ideology). A desired dystopia may therefore not be recognised as a catastrophe by anyone who experiences it.[5] Ord’s (2020) “plausible examples” of a desired dystopia include:

worlds that forever fail to recognise some key form of harm or injustice (and thus perpetuate it blindly), worlds that lock in a single fundamentalist religion, and worlds where we deliberately replace ourselves with something that we didn’t realise was much less valuable (such as machines incapable of feeling).

It’s also possible that some transitions to undesired or enforced dystopias (e.g., totalitarian regimes) could occur with no (or little) bloodshed or economic damage.[6]

In contrast, unrecoverable collapse seems likely to involve a large number of deaths. It also seems almost “by definition” to involve major economic damage. And extinction would of course involve a huge amount of death and economic damage.

We could thus represent the relationship between the concepts of global catastrophe, existential catastrophe, extinction, and failed continuation via the following Venn diagram:

Suffering, existential, and global catastrophes

Suffering risks (also known as risks of astronomical suffering, or s-risks) are typically defined as “risks where an adverse outcome would bring about suffering on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far” (Daniel, 2017).[7] I’ll use the term suffering catastrophe to describe the realisation of an s-risk; i.e., an event or process involving suffering on an astronomical scale.[8]

Two mistakes people sometimes make are discussing s-risks as if they’re entirely distinct from existential risks, or discussing s-risks as if they’re a subset of existential risks. In reality:

  1. There are substantial overlaps between suffering catastrophes and existential catastrophes, because some existential catastrophes would involve or result in suffering on an astronomical scale.

    • Most obviously, many unrecoverable dystopia scenarios would involve suffering of humans or nonhumans on an astronomical scale. For example, a stable, global totalitarian regime could cause many billions of humans to suffer in slavery-like conditions each generation, for many generations (see also Reducing long-term risks from malevolent actors). Or a “desired dystopia” might involve huge numbers of suffering wild animals (via terraforming) or suffering digital minds, because humanity fails to realise that that’s problematic.

    • As another example, some AI-induced extinction events may be followed by large amounts of suffering, such as if the AI then runs huge numbers of highly detailed simulations including sentient beings (see e.g. Tomasik).[9]

  2. But there could also be suffering catastrophes that aren’t existential catastrophes, because they don’t involve the destruction of (the vast majority of) humanity’s long-term potential.

    • This depends on one’s moral theory or values (or the “correct” moral theory or values), because, as noted above, that affects what counts as fulfilling or destroying humanity’s long-term potential.

    • For example, the Center on Long-Term Risk notes: “Depending on how you understand the [idea of loss of “potential” in definitions] of [existential risks], there actually may be s-risks which aren’t [existential risks]. This would be true if you think that reaching the full potential of Earth-originating intelligent life could involve suffering on an astronomical scale, i.e., the realisation of an s-risk. Think of a quarter of the universe filled with suffering, and three quarters filled with happiness. Considering such an outcome to be the full potential of humanity seems to require the view that the suffering involved would be outweighed by other, desirable features of reaching this full potential, such as vast amounts of happiness.”

    • In contrast, given a sufficiently suffering-focused theory of ethics, anything other than near-complete eradication of suffering might count as an existential catastrophe.

Likewise:

  1. There are substantial overlaps between suffering catastrophes and global catastrophes, because some suffering catastrophes (or the events that cause them) would involve major human fatalities and economic damage.

  2. But there could also be suffering catastrophes that aren’t global catastrophes, because they don’t involve major human fatalities and economic damage.

    • In particular, there could be suffering among nonhumans and/or ongoing human suffering without a spike in human fatalities at any particular time.

We could thus represent the relationship between the concepts of suffering catastrophe, existential catastrophe, and global catastrophe via the following Venn diagram (although this diagram might be inaccurate, given a sufficiently suffering-focused theory of ethics):


If you found this post interesting, you may also be interested in my previous posts Clarifying existential risks and existential catastrophes and 3 suggestions about jargon in EA.

I’m grateful to Tobias Baumann, Anthony D, and David Kristoffersson for helpful feedback. I’m also grateful to David for discussions that may have informed the ideas I expressed in this post. This does not imply these people’s endorsement of all of this post’s claims.

I wrote this post in my personal time, and it doesn’t necessarily represent the views of any of my employers.


  1. I’m making a distinction between “risk” and “catastrophe”, in which the former refers to the chance of a bad event happening and the latter refers to the bad event itself. ↩︎

  2. Ord writes “I am understanding humanity’s longterm potential in terms of the set of all possible futures that remain open to us. This is an expansive idea of possibility, including everything that humanity could eventually achieve, even if we have yet to invent the means of achieving it.” ↩︎

  3. For the purposes of this post, “extinction” refers to the premature extinction of humanity, and excludes scenarios such as:

    • humanity being “replaced” by a “descendant” which we’d be happy to be replaced by (e.g., whole brain emulations or a slightly different species that we evolve into)

    • “humanity (or its descendants) [going] extinct after fulfilling our longterm potential” (Ord, 2020)

    Those sorts of scenarios are excluded because they might not count as existential catastrophes. ↩︎

  4. Alternative Venn diagrams one could create, and which could also be useful, would include a diagram in which failed continuation is broken down into its two major types, and a diagram based on Bostrom’s typology of existential catastrophes. ↩︎

  5. However, certain other definitions of “global catastrophe” might capture all existential catastrophes as a subset. For example, Yassif writes “By our working definition, a GCR is something that could permanently alter the trajectory of human civilization in a way that would undermine its long-term potential or, in the most extreme case, threaten its survival.” Taken literally, that definition could include events that would involve no obvious or immediate deaths and economic damage, and that no one at the time recognises as a catastrophe. ↩︎

  6. Seemingly relevantly, Bostrom’s classification of types of existential risk (by which I think he really means “types of existential catastrophe”) includes “plateauing — progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity”, as well as “unconsummated realization”. Both of those types seem like they could occur in ways such that the catastrophe doesn’t rapidly and directly lead to large amounts of death and economic damage.

    It’s also possible that an existential catastrophe which doesn’t rapidly and directly lead to large amounts of death and economic damage could still lead to that eventually and indirectly. For example, the catastrophe might cause a failure to develop some valuable technology that would’ve been developed otherwise, and that would’ve saved lives or boosted the economy. If that happens, I personally wouldn’t say that that alone should make the existential catastrophe also count as a global catastrophe. But I also think that that’s a debatable and relatively unimportant point. ↩︎

  7. This definition of course leaves some room for interpretation. See the rest of Daniel’s post for details. Also note that, in December 2019, the Center on Long-Term Risk (CLR) repeated this definition but added that “This definition may be updated in the near future.” (CLR is arguably the main organisation focused on s-risks, and is the organisation Daniel was part of when he wrote the above-linked post.) ↩︎

  8. I haven’t actually seen the term “suffering catastrophe” used elsewhere, and sometimes the term “s-risk” is itself used to refer to the event, rather than the risk of the event occurring (e.g., here). But it seems to me preferable to have a separate term for the risk and the event. And “suffering catastrophe” seems to make sense as a term, by analogy to the relationship between (a) the terms “existential risk” and “existential catastrophe”, and (b) the terms “global catastrophic risk” and “global catastrophe”. ↩︎

  9. Also, more speculatively, if there are (or will be) vast numbers of suffering sentient beings elsewhere in the universe, and humans currently have the potential to stop or substantially reduce that suffering, then human extinction or unrecoverable collapse could also represent a suffering catastrophe. See also The expected value of extinction risk reduction is positive. ↩︎
