Eight high-level uncertainties about global catastrophic and existential risk

I wanted to write a quick overview of overarching topics in global catastrophic and existential risk where we do not know much yet. Each of these topics deserves a lot of attention on its own, and this is simply intended as a non-comprehensive overview. I use the term ‘hazard’ to indicate an event that could lead to adverse outcomes, and the term ‘risk’ to indicate the product of a hazard’s probability times its negative consequences. Although I believe not all uncertainties are of equal importance (some might be more important by orders of magnitude), I discuss them in no particular order. Furthermore, the selection of uncertainties reflects what has been at the forefront of my mind, not a judgment that these are the eight most important uncertainties.

1. Timelines

Existential risk is often discussed as ‘y% risk in the next 100 years [or some other timespan], conditional on no other catastrophic events’. However, risk is probably not equally distributed over time. For example, risks from climate change are larger in the future, as global temperatures continue to rise. Assuming we can do a reasonable assessment of risk over time, comparing timelines of different hazards is important for cross-risk prioritization. After all, we should discount the risk of one hazard by the probability that another catastrophic event would occur first. For example, I hear many non-EAs say that ‘we shouldn’t worry about these futuristic risks such as AI, because the risk of catastrophe from climate change in the near term is very high’. On the other hand, we should also take into account the timeline of achieving civilizational invulnerability; if one believes superintelligence is nearly certain to arrive before 2100, they should heavily discount the post-2100 existential risk.

However, timelines by themselves only affect the risk of other hazards by a small factor. For example, even if the global catastrophic risk from climate change is 10% until 2050, that reduces the x-risk from AI after 2050 by only 10%.
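As a minimal sketch of that discounting arithmetic, in code (the probabilities below are illustrative assumptions, not estimates):

```python
# Illustrative sketch of the discounting arithmetic above.
# All numbers are made-up assumptions, not estimates.
p_climate_catastrophe_by_2050 = 0.10  # assumed catastrophic risk from climate change before 2050
p_ai_xrisk_after_2050 = 0.20          # assumed x-risk from AI after 2050, given we get there intact

# The earlier hazard discounts the later one only by the probability that it occurs first.
discounted_ai_xrisk = (1 - p_climate_catastrophe_by_2050) * p_ai_xrisk_after_2050
print(f"{discounted_ai_xrisk:.2f}")  # 0.18, i.e. a reduction of only 10% relative to 0.20
```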

2. Probability of recovery

Longtermism is unique in that it makes a big moral distinction between global collapse (i.e. the loss of critical infrastructure and of more than 50% of the world population) and existential catastrophes (e.g. extinction). Relatedly, a large part of the argument in favour of a focus on emerging technologies is that the probability of recovery after global collapse is high or very high. However, not much research has been done into this (Cf. GCRI’s page for an exception). To me, it seems that people’s primary reason to believe recovery is probable is that humanity will have a lot of time: the Earth will remain habitable for a long time (100 million to 1 billion years; ref) and the risk from natural hazards is low (Cf. Snyder-Beattie, Ord & Bonsall, 2019 for an upper bound on the risk).

However, not much research has been done on humanity’s expected lifespan after collapse, on how much of this period would be suitable for large-scale complex societies (e.g. how often the climate would be suitable for agriculture; cf. Baum et al., 2019), on how different catastrophes would affect the conditions for recovery, or on obstacles that a future humanity would face (e.g. limited resources for industrialization). A good rule of thumb seems to be ‘the later the collapse, the worse the prospects for humanity’ (cf. Luke Kemp). However, how much worse it would be is unclear. Furthermore, I believe that the probability of recovery is sensitive to the type of collapse and to how the collapse influences the conditions for recovery. This means that we should not speak of a single probability of recovery; the answer depends on one’s other judgments about which collapse scenarios are most likely.

Given the limited research available, I find confidence on this question unjustified.

3. Quality of recovery

Even more uncertain than the probability of recovery is the quality of recovery. My impression is that the standard view is ‘we can’t answer this question, so the epistemically responsible approach is to assume an expected value just as good/bad as our current trajectory, with a large underlying variance in possible outcomes.’

I believe it would be valuable to do research on this topic: some things could potentially be discovered by a diligent researcher. For example, a recovered global society might be less reliant on fossil fuels, reducing the pressures from climate change. On the other hand, a recovering society might re-invent weapons of mass destruction, and the early phase after the discovery of these weapons seems much riskier than the current situation.

4. Degree of fragility of society

Current EA thinking seems to apply a multi-hazard model of existential risk analysis. It simply looks at different hazards (nuclear war, pandemic, superintelligence, extreme climate change) and asks, for each hazard, ‘what’s the probability that this hazard will occur?’ and ‘given that it occurs, what is the probability of collapse, and of extinction?’ (Cf. p. 1-6 of my write-up for a more technical description of this model).
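A minimal numerical sketch of this kind of model follows below. These are not the write-up’s actual numbers: the hazards listed and all probabilities are illustrative assumptions, and hazards are treated as independent for simplicity.

```python
# Toy version of the multi-hazard model described above.
# Hazard names and all probabilities are illustrative assumptions;
# hazards are treated as independent for simplicity.
hazards = {
    # name: (P(occurs this century), P(collapse | occurs), P(extinction | occurs))
    "nuclear war":            (0.10, 0.30, 0.01),
    "engineered pandemic":    (0.05, 0.20, 0.02),
    "unaligned AI":           (0.10, 0.10, 0.50),
    "extreme climate change": (0.20, 0.05, 0.001),
}

p_no_collapse, p_no_extinction = 1.0, 1.0
for p_occurs, p_collapse, p_extinction in hazards.values():
    p_no_collapse *= 1 - p_occurs * p_collapse
    p_no_extinction *= 1 - p_occurs * p_extinction

print(f"P(collapse from some hazard)   ~ {1 - p_no_collapse:.3f}")
print(f"P(extinction from some hazard) ~ {1 - p_no_extinction:.3f}")
```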

However, this approach seems to assume a resilient global system, where extreme events are necessary to cause collapse or extinction. In practice, we don’t know how resilient society is. Complex dynamic systems can appear stable, only to radically and suddenly fail (e.g. the financial system in 2008). If society is actually fragile, a focus on hazards is misguided, and attention should instead go to improving the resilience of the global system. On the other hand, if society is resilient, minor hazards would be unimportant. Major hazards, while remaining the main source of collapse and extinction risk, would then be more likely to result only in global disruption. This leads to the next uncertainty.

5. Long-term effects of disruption

Within the hazard-focused models, attention is mostly given to the ‘direct’ effects: the likelihood that a hazard directly leads to collapse or existential catastrophe. However, if a nuclear war were to occur that does not lead to global collapse or extinction, it would still be a major event in human history. The ‘status quo trajectory’ would be massively disrupted: post-war power relations would be significantly changed, humanity would view global catastrophe as much more likely for the following decades, and many other complex consequences would follow (e.g. World War II plausibly contributed to the empowerment of women, which had large social consequences).

If a major hazard is much more likely to lead to global disruption than to collapse or extinction, and if global disruption has significant long-term effects on humanity’s trajectory, then a large fraction of the expected value of work on reducing global catastrophe comes from how that work affects the likelihood and effects of global disruption.
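A rough numerical illustration of this point, with all inputs being made-up assumptions rather than estimates:

```python
# Made-up numbers illustrating why disruption can dominate the expected loss
# if it is much more likely than collapse/extinction and still has lasting effects.
p_extinction = 0.01   # P(extinction | major hazard occurs)
p_collapse   = 0.09   # P(collapse | major hazard occurs)
p_disruption = 0.90   # P(disruption only | major hazard occurs)

# Long-term value lost, expressed as a fraction of the value lost to extinction (assumed).
loss_extinction = 1.00
loss_collapse   = 0.50
loss_disruption = 0.05

expected_losses = {
    "extinction": p_extinction * loss_extinction,
    "collapse":   p_collapse * loss_collapse,
    "disruption": p_disruption * loss_disruption,
}
total = sum(expected_losses.values())
for outcome, loss in expected_losses.items():
    print(f"{outcome}: {loss / total:.0%} of expected long-term loss")
# With these assumptions, disruption accounts for roughly 45% of the expected loss,
# as much as collapse and far more than extinction.
```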

6. Expected value of the future

Work on existential risk is regularly motivated by appealing to the claim that the future would be tremendously valuable, and that extinction would therefore be an ‘astronomical waste’. However, many people would disagree with this optimistic assumption. Arguments for the quality of the future rely on speculative considerations, such as the claim that the expected value calculation is dominated by futures optimized for value or disvalue, or that other agents would do worse in expectation (Cf. Brauner & Grosse-Holz, section 2.1).

Furthermore, the option value of postponing extinction is limited (Brauner & Grosse-Holz (section 1.3), me). In addition, there is the consideration of ‘which world gets saved’: if we change the properties of the world to reduce extinction risk, we also affect the properties of a surviving world. In a similar vein, we might conclude that a surviving world has certain properties (e.g. some combination of technological maturity, wisdom, and coordination) given that there has not been an extinction event.

Further work on the value of the future seems valuable. I’d especially like to see an accessible piece geared towards people who believe the future is not clearly positive. It could either provide convincing reasoning that the future is likely to be valuable, or argue that work on GC-/X-risk reduction tends to be valuable regardless. Of course, opposing viewpoints are also very welcome.

7. Ways to achieve civilizational invulnerability

Arguably, the goal of existential risk reduction is to approach civilizational invulnerability so that a good future can be created. This is a barely explored question, and there might be multiple ways to achieve such invulnerability (Cf. Bostrom (2013, 2018) for discussion of technological maturity and the Vulnerable World Hypothesis). Potential strategies probably involve a combination of technological and non-technological innovation (e.g. cultural, legislative, and economic innovation). Some feasible strategies may lean heavily on technological innovation, while others could rely more on non-technological innovation.

I am not sure whether research on this would uncover valuable information. One potentially promising line of research (suggested by Aaron Gertler) is the trade-off between x-risk reduction and the quality of the future (including how it affects the likelihood of suffering risks).

8. Other models or angles of existential risk & meta uncertainty

It is tempting, implicitly or explicitly, to construct a single model of hazards and probable consequences. However, the dominant model might be missing some important factors or highlight only a part of the problem space. Reality can be carved up in different ways, and it is good practice to view a problem from multiple angles. Different models of existential risk (including qualitative ones) could highlight aspects that are currently in our collective blind spots. Examples include viewing all existential risks through the lens of agential risk (i.e. risks stemming from people’s intentional or unintentional behaviour, Cf. Torres, 2016), considering boring apocalypses (Cf. Liu, Lauta, and Maas (2018); and Kuhleman (2018)), or using a structural classification of global catastrophic risk (Cf. Avin et al., 2018).

Lastly, meta uncertainty is uncertainty about what we are, or should be, uncertain about. As a case in point, this list is not comprehensive, and I hope others add their main uncertainties to it.

---

Thanks to Aaron Gertler for providing useful feedback on this post. Many of my views here crystallized during my summer visitorship at CSER sponsored by BERI. Feel free to contact me if you want to know more about what I call ‘comprehensive existential risk assessment’.