Eight high-level uncertainties about global catastrophic and existential risk

I wanted to write a quick overview of overarching topics in global catastrophic and existential risk about which we do not yet know much. Each of these topics deserves substantial attention in its own right; this is simply intended as a non-comprehensive overview. I use the term ‘hazard’ for an event that could lead to adverse outcomes, and the term ‘risk’ for the product of a hazard’s probability and its negative consequences. Although I believe not all uncertainties are equally important (some might be more important by orders of magnitude), I discuss them in no particular order. Furthermore, the selection reflects what has been at the forefront of my mind, not the eight most important uncertainties.

1. Timelines

Existential risk is often discussed as ‘y% risk in the next 100 years [or some other timespan], conditional on no other catastrophic events’. However, risk is probably not equally distributed over time. For example, risks from climate change grow larger in the future as global temperatures continue to rise. Assuming we can do a reasonable assessment of risk over time, comparing the timelines of different hazards is important for cross-risk prioritization. After all, we should discount the risk of one hazard by the probability that another catastrophic event occurs first. For example, I hear many non-EAs say that ‘we shouldn’t worry about futuristic risks such as AI, because the risk of catastrophe from climate change in the near term is very high’. On the other hand, we should also take into account the timeline for achieving civilizational invulnerability: if one believes superintelligence is nearly certain to arrive before 2100, one should heavily discount post-2100 existential risk.

However, timelines by themselves only affect the risk of other hazards by a small factor. For example, even if the global catastrophic risk from climate change is 10% until 2050, that reduces the x-risk from AI after 2050 by only 10%.
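To make that discounting logic concrete, here is a minimal sketch in Python. All numbers are illustrative assumptions of mine, not estimates from the literature.

```python
# Minimal sketch of timeline discounting; the numbers are illustrative assumptions.
p_climate_catastrophe_by_2050 = 0.10  # assumed global catastrophic risk from climate change until 2050
p_ai_xrisk_after_2050 = 0.20          # assumed AI x-risk after 2050, conditional on reaching 2050 intact

# The AI risk only materializes in worlds that reach 2050 without a prior catastrophe,
# so it is discounted by the probability that a climate catastrophe happens first.
effective_ai_xrisk = (1 - p_climate_catastrophe_by_2050) * p_ai_xrisk_after_2050
print(f"{effective_ai_xrisk:.2f}")  # 0.18, i.e. only a 10% reduction relative to 0.20
```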

2. Probability of recovery

Longtermism is unique in that it makes a big moral distinction between global collapse (i.e. the loss of critical infrastructure and of more than 50% of the world population) and existential catastrophes (e.g. extinction). In turn, a major argument in favour of a focus on emerging technologies is that the probability of recovery after global collapse is high or very high. However, not much research has been done into this (cf. GCRI’s page for an exception). To me, it seems that people’s primary reason to believe recovery is probable is that humanity will have a lot of time: the Earth will remain habitable for a long time (100 million to 1 billion years; ref) and the risk from natural hazards is low (cf. Snyder-Beattie, Ord & Bonsall, 2019 for an upper bound on the risk).

However, not much research has been done on humanity’s expected lifespan after collapse, on how much of this period would be suitable for large-scale complex societies (e.g. how often the climate would be suitable for agriculture; cf. Baum et al., 2019), on how different catastrophes would affect the conditions for recovery, or on the obstacles a future humanity would face (e.g. limited resources for industrialization). A good rule of thumb seems to be ‘the later the collapse, the worse the prospects for humanity’ (cf. Luke Kemp), but how much worse is unclear. Furthermore, I believe the probability of recovery is sensitive to the type of collapse and to how the collapse influences the conditions for recovery. This means we should not speak of a single probability of recovery: it depends on one’s other judgments about which collapse scenarios are most likely.

Given the limited research available, I find confidence on this question unjustified.

3. Quality of recovery

Even more uncertain than the probability of recovery is the quality of recovery. My impression is that the standard view is ‘we can’t answer this question, so the epistemically responsible approach is to assume an expected value just as good/bad as our current trajectory, with a large underlying variance in possible outcomes.’

I believe it would be valuable to do research on this topic: some things could potentially be discovered by a diligent researcher. For example, a recovered global society might be less reliant on fossil fuels, reducing the pressures from climate change. On the other hand, a recovering society might reinvent weapons of mass destruction, and the early phase after the discovery of these weapons seems much riskier than the current situation.

4. Degree of fragility of society

Current EA thinking seems to apply a multi-hazard model of existential risk analysis. It simply looks at different hazards (nuclear war, pandemic, superintelligence, extreme climate change) and asks for each hazard ‘what’s the probability that this hazard will occur?’ and ‘given that it occurs, what is the probability of collapse, and of extinction?’ (cf. pp. 1-6 of my write-up for a more technical description of this model).
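As a rough illustration of this bookkeeping (the probabilities below are made up, not anyone’s actual estimates), a toy version of such a multi-hazard model might look like this, ignoring interactions between hazards:

```python
# Toy multi-hazard model; all probabilities are illustrative placeholders.
# For each hazard: probability of occurring this century, plus conditional
# probabilities of collapse and extinction given that it occurs.
hazards = {
    "nuclear war":            {"p_occur": 0.05, "p_collapse": 0.30, "p_extinction": 0.01},
    "engineered pandemic":    {"p_occur": 0.03, "p_collapse": 0.20, "p_extinction": 0.02},
    "unaligned AI":           {"p_occur": 0.10, "p_collapse": 0.10, "p_extinction": 0.30},
    "extreme climate change": {"p_occur": 0.20, "p_collapse": 0.05, "p_extinction": 0.001},
}

# Ignoring interactions, each hazard's contribution is simply its probability of
# occurrence times the conditional probability of the outcome.
p_collapse = sum(h["p_occur"] * h["p_collapse"] for h in hazards.values())
p_extinction = sum(h["p_occur"] * h["p_extinction"] for h in hazards.values())
print(f"collapse risk ~ {p_collapse:.3f}, extinction risk ~ {p_extinction:.3f}")
```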

However, this approach seems to assume a resilient global system in which extreme events are required to cause collapse or extinction. In practice, we don’t know how resilient society is. Complex dynamic systems can appear stable, only to fail radically and suddenly (e.g. the financial system in 2008). If society is actually fragile, a focus on hazards is misguided, and the focus should instead be on improving the resilience of the global system. On the other hand, if society is resilient, minor hazards would be unimportant. Major hazards, besides being the main source of collapse or extinction, would then be more likely to result only in global disruption. This leads to the next uncertainty.

5. Long-term effects of disruption

Within hazard-focused models, attention is mostly given to the ‘direct’ effects: the likelihood that a hazard directly leads to collapse or existential catastrophe. However, a nuclear war that did not lead to global collapse or extinction would still be a major event in human history. The ‘status quo trajectory’ would be massively disrupted: post-war power relations would be significantly changed, humanity would view global catastrophe as much more likely for the following decades, and many other complex consequences would follow (e.g. World War II plausibly contributed to the empowerment of women, which had large social consequences).

If a major hazard is much more likely to lead to global disruption than to collapse or extinction, and if global disruption has significant long-term effects on humanity’s trajectory, then a large fraction of the expected value of reducing global catastrophe comes from how that work affects the likelihood and effects of global disruption.
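A toy calculation, with entirely made-up numbers, shows how this can play out: even if the long-term value lost through disruption is small per event, its much higher likelihood can make it a sizeable share of the total expected loss.

```python
# Toy decomposition of expected long-term loss from a major hazard; all numbers are made up.
# Outcome probabilities, conditional on the hazard occurring:
p_disruption = 0.80
p_collapse   = 0.15
p_extinction = 0.05
# Long-term value lost in each outcome (arbitrary units, extinction = 1):
loss_disruption = 0.05   # a somewhat worse long-run trajectory
loss_collapse   = 0.30
loss_extinction = 1.00

expected_loss = (p_disruption * loss_disruption
                 + p_collapse * loss_collapse
                 + p_extinction * loss_extinction)
share_via_disruption = p_disruption * loss_disruption / expected_loss
print(f"share of expected loss via disruption: {share_via_disruption:.0%}")  # roughly 30%
```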

6. Expected value of the future

Work on existential risk is regularly motivated by appealing to the claim that the future would be tremendously valuable; extinction would be an ‘astronomical waste’. However, many people would disagree with this optimistic assumption. Arguments for the quality of the future rely on speculative claims, such as that the expected value calculation is dominated by futures optimized for value or disvalue, or that other agents would do worse in expectation (cf. Brauner & Grosse-Holz, section 2.1).

Furthermore, the option value of postponing extinction is limited (Brauner & Grosse-Holz (section 1.3), me). In addition, there is the consideration of ‘which world gets saved’: if we change the properties of the world to reduce extinction risk, we also affect the properties of a surviving world. In a similar vein, we might conclude that a surviving world has certain properties (e.g. some combination of technological maturity, wisdom, and coordination), given that there has not been an extinction event.

Further work on the value of the future seems worthwhile. I’d especially like to see an accessible piece geared towards people who believe the future is not clearly positive. It could either provide convincing reasoning that the future is likely to be valuable, or argue that work on GC-/x-risk reduction tends to be valuable regardless. Of course, opposing viewpoints are also very welcome.

7. Ways to achieve civilizational invulnerability

Arguably, the goal of existential risk reduction is to approach civilizational invulnerability so that a good future can be created. How to achieve this is a barely explored question, and there might be multiple ways to do so (cf. Bostrom (2013, 2018) for discussion of technological maturity and the Vulnerable World Hypothesis). Potential strategies probably combine technological and non-technological innovation (e.g. cultural, legislative, and economic innovation). Some feasible strategies may lean heavily on technological innovation, while others could rely more on non-technological innovation.

I am not sure whether research on this would uncover valuable information. One potentially promising line of research (suggested by Aaron Gertler) is the trade-off between x-risk reduction and the quality of the future (including how it affects the likelihood of suffering risks).

8. Other models or angles on existential risk & meta-uncertainty

It is tempting to construct, implicitly or explicitly, a single model of hazards and probable consequences. However, the dominant model might be missing some important factors or might highlight only part of the problem space. Reality can be carved up in different ways, and it is good practice to view a problem from multiple angles. Different models of existential risk (including qualitative ones) could highlight aspects that are currently in our collective blind spots. Examples include viewing all existential risks through the lens of agential risk (i.e. risks stemming from people’s intentional or unintentional behaviour; cf. Torres, 2016), considering boring apocalypses (cf. Liu, Lauta & Maas, 2018; and Kuhleman, 2018), or using a structural classification of global catastrophic risk (cf. Avin et al., 2018).

Lastly, meta-uncertainty is uncertainty about what we are, or should be, uncertain about. Case in point: this list is not comprehensive, and I hope others will add their main uncertainties to it.

---

Thanks to Aaron Gertler for providing useful feedback on this post. Many of my views here crystallized during my summer visitorship at CSER, sponsored by BERI. Feel free to contact me if you want to know more about what I call ‘comprehensive existential risk assessment’.