Prioritising between extinction risks: Evidence Quality

Section 1: Strength of Evidence

The EA community has identified many sources of extinction risk. However, most efforts to mitigate extinction risk from one source will not mitigate risk from other sources. This means we must prioritise between extinction risks to maximise impact.

Many factors should be considered in this prioritisation. This post focuses on only one factor: evidence quality. All else equal, we should prioritise problems for which there is stronger evidence that they pose an extinction risk.

In evidence-based medicine, there is a well-known “pyramid of evidence” which ranks the strength of different types of evidence. This pyramid does not transfer to the study of extinction risks, where all available evidence is of much lower quality than the evidence available in medicine. We also face observation selection effects, which mean we will never find past examples of human extinction.

This post introduces a potential approach to ranking evidence strength for extinction risks. I’d love to see people build on this and propose better alternatives.

I haven’t included “expert opinion” and “superforecaster estimates” in this ranking, since I think experts and superforecasters should be weighing evidence using this ranking to arrive at their opinions and estimates.

Proposed Levels of Evidence

1) Precedent of extinction of multiple species

Asteroids (Cretaceous-Paleogene Extinction)

Supervolcanoes / non-anthropogenic climate change (Permian-Triassic Extinction, Triassic-Jurassic Extinction)

2) Precedent of extinction of a single species

Infectious Disease (Tasmanian Tiger, Golden Toad, Christmas Island Pipistrelle)

3) Precedent of a human societal collapse

4) Precedent of an extremely large number of human deaths in an extremely short period of time

War (World War 2, Taiping Rebellion)

Famine (Great Chinese Famine)

5) Clear mechanism of extinction

Nuclear War

Gamma Ray Bursts

Biodiversity Loss

6) Unclear mechanism of extinction

AGI

Nanotechnology

Particle Physics Experiments

Global Systemic Risks / Cascading Risks

Geoengineering
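
To make it easier to see how this ranking might be used, here is a minimal sketch in Python that encodes the levels as ordinal scores (1 = strongest evidence, 6 = weakest) and sorts a handful of risks accordingly. This assumes evidence strength is the only factor being weighed; the specific level assignments simply mirror the examples above and are illustrative, not definitive.

```python
# Minimal sketch: the proposed evidence levels as ordinal scores,
# where 1 is the strongest evidence and 6 the weakest.
EVIDENCE_LEVELS = {
    1: "Precedent of extinction of multiple species",
    2: "Precedent of extinction of a single species",
    3: "Precedent of a human societal collapse",
    4: "Precedent of very many human deaths in a very short time",
    5: "Clear mechanism of extinction",
    6: "Unclear mechanism of extinction",
}

# Illustrative risk -> evidence level assignments, mirroring the list above.
risk_evidence_level = {
    "Asteroids": 1,
    "Supervolcanoes": 1,
    "Infectious disease": 2,
    "Nuclear war": 5,
    "AGI": 6,
    "Nanotechnology": 6,
}

# All else equal, prioritise risks with stronger (lower-numbered) evidence.
for risk, level in sorted(risk_evidence_level.items(), key=lambda kv: kv[1]):
    print(f"{risk}: level {level} ({EVIDENCE_LEVELS[level]})")
```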

Section 2: Uncertainties and Open Questions

What are the correct reference classes? Should we treat previous societal collapses induced by technological change as a reason to prioritise nanotechnology and AI Safety, even if the technology that induced the collapse looked very different?

Will more research into exoplanets allow us to learn about past events which turned habitable planets into uninhabitable ones? I would put this type of evidence at the top of the ranking.

I’m unsure where to place “precedent of a human societal collapse” relative to precedent of extinction of animal species; this depends heavily on how special you think humans are relative to other animals. Clearly, we’re much better able to co-ordinate and respond to emerging threats.

Section 3: Implications for Open Philanthropy, 80K and individual EAs

I don’t think Open Philanthropy, 80K, or most individual EAs have given enough consideration to strength of evidence when prioritising between extinction risks.

So, based on this framework, I think all of these groups should allocate more resources (money, careers, etc.) towards planetary defence and supervolcanoes, and fewer resources towards nuclear war and AI Safety, relative to the status quo.
