Summary: Mistakes in the Moral Mathematics of Existential Risk (David Thorstad)

This post summarizes “Three Mistakes in the Moral Mathematics of Existential Risk,” a Global Priorities Institute Working Paper by David Thorstad. This post is part of my sequence of GPI Working Paper summaries. For more, Thorstad’s blog, Reflective Altruism, has a five-part series on this paper.

Introduction

Many prominent figures in the effective altruism community argue existential risk mitigation offers astronomical value. Thorstad believes there are many philosophical ways to push back on this conclusion[1] and even mathematical ones.

Thorstad argues leading models of existential risk mitigation neglect morally relevant parameters, mislocating debates and inflating existential risk reduction’s value by many orders of magnitude.

He broadly assumes we aren’t in the time of perils (which he justifies in this paper) and treats extinction risks as only those that kill all humans.[2]

Mistake 1: Cumulative Risk

Existential risks recur century after century throughout humanity’s potential future, meaning they can be presented as a per-century risk repeated each century or as a cumulative risk of occurring during a total time interval (e.g., the cumulative risk of extinction before Earth becomes less habitable).

  • Mistake 1: Expected value calculations of existential risk mitigation model interventions as reducing the cumulative risk, not the per-century risk.

Thorstad identifies two problems with this choice.

  1. If humans live a long time, small reductions in cumulative risk require astronomical reductions in per-century risk. This is because the chance we survive for the total time interval in question depends on cumulative risk, and our cumulative survival chance must exceed our reduction in cumulative risk.

  2. Reducing cumulative risk with our actions today requires changing the risk for many, many centuries to come. So, even if we can substantially shift the risk of extinction in this century or nearby ones, we’ll likely have a hard time doing so a thousand or a million centuries from now.

  • For instance, if we want to create a meager one-in-a-hundred-million absolute reduction[3] in existential risk before Earth becomes less habitable,[4] the per-century risk must be roughly one-in-a-million or lower.[5] Many longtermists estimate this century’s existential risk to be ~15–20% or higher,[6] in which case we’d need to reduce the per-century risk by a factor of roughly a hundred thousand. Hence, many expected value calculations of existential risk mitigation demand vastly greater reductions in per-century risk than they initially seem to.
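
The arithmetic behind this example can be verified directly. A minimal sketch, assuming a constant per-century risk r and using only the figures above (a 10^-8 cumulative reduction over a one-billion-year, i.e., ten-million-century, horizon):

```python
# Horizon from the text: one billion years = ten million centuries.
N = 10_000_000
# The cumulative survival chance must exceed the 1e-8 absolute reduction.
target_survival = 1e-8

# With constant per-century risk r, P(survive N centuries) = (1 - r)**N.
# Solving (1 - r)**N = target_survival for r gives the maximum tolerable risk.
r = 1 - target_survival ** (1 / N)
print(f"maximum per-century risk: {r:.2e}")  # ≈ 1.8e-06, about one in a million

# Against a ~20% per-century estimate, the required reduction factor:
print(f"required reduction factor: {0.20 / r:,.0f}")  # ≈ 100,000x
```

At a 16% per-century estimate the factor is closer to ninety thousand; either way, “a hundred thousand times” is the right order of magnitude.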

Mistake 2: Background Risk

Millett and Snyder-Beattie (MSB) offer one of the most cited papers discussing biorisk—biological extinction risk—featuring a favorable cost-effectiveness estimate. While Thorstad believes there are many possible complaints about MSB’s model, he raises two.

  • Mistake 2: Existential risk mitigation calculations (including MSB’s model) ignore background risk.

In MSB’s model, the background risk is the risk of extinction from all non-biological sources. But, modifying this model to include background risk changes the estimated cost-effectiveness considerably.

Without background risk, a 1% relative reduction in biorisk has a meaningful impact on per-century risk: it discounts per-century risk by 1%. But, when you include non-biological background risk, the same reduction in biorisk changes the per-century risk far less: per-century risk becomes the discounted biorisk plus the full background risk. Since many longtermists believe the per-century risk is very high (~15–20% or higher)[6] and thus much greater than biorisk, this substantially reduces biorisk mitigation’s estimated cost-effectiveness.
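
To see the effect numerically, here is a hedged sketch; the biorisk value is purely hypothetical, chosen only to show the structure, while the ~20% background figure comes from the text:

```python
# Hypothetical per-century risks, chosen only to illustrate the structure.
biorisk = 0.0001      # illustrative per-century biological extinction risk
background = 0.20     # per-century risk from all non-biological sources (text's ~20%)
reduction = 0.01      # a 1% relative reduction in biorisk

# Ignoring background risk: total per-century risk is just the biorisk,
# so a 1% relative cut in biorisk cuts total risk by the full 1%.
drop_without_bg = (biorisk - biorisk * (1 - reduction)) / biorisk

# Including background risk: total risk is discounted biorisk plus background.
total = biorisk + background
new_total = biorisk * (1 - reduction) + background
drop_with_bg = (total - new_total) / total

print(drop_without_bg)  # 0.01: a full 1% drop in per-century risk
print(drop_with_bg)     # ~5e-06: the same intervention barely moves total risk
```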

For reference, GiveWell estimates its most effective short-term interventions can save a life-year for about $100.[7]

Thorstad also raises a second complaint with MSB’s model: It assumes we reduce the cumulative biorisk rather than the more plausible prospect of reducing the biorisk of nearby centuries.

Suppose our intervention reduces this century’s biorisk, but other centuries must fend for themselves. Combined with the background risk, this assumption revises the cost-effectiveness estimate downward a second time.

The result is a precipitous drop in cost-effectiveness: GiveWell’s recommendations now appear more cost-effective, perhaps by orders of magnitude.
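
One way to see why limiting the intervention to a single century matters: under a constant per-century risk r, the century of extinction is roughly geometrically distributed, so humanity’s expected future is only about 1/r centuries long, and a high background risk caps how much future a one-century reduction can protect. This is a simplified sketch, not Thorstad’s or MSB’s exact model:

```python
# Expected future duration under a constant per-century extinction risk r:
# the century of extinction is geometrically distributed, with mean ~1/r.
def expected_centuries(r: float) -> float:
    return 1.0 / r

print(expected_centuries(0.20))  # 5.0 centuries at a ~20% per-century risk
print(expected_centuries(1e-6))  # ~1,000,000 centuries at one-in-a-million risk
```

At the high per-century risks many longtermists estimate, the expected future is only a handful of centuries long, which is why a one-century biorisk reduction protects far less value than a permanent one.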

Mistake 3: Population Dynamics

Longtermist estimates of the future population suggest astronomically many lives are at stake if we don’t act prudently as a species. However, these estimates, calculated by multiplying the number of lives a region (e.g., Earth) can support by the duration humanity could exist in the region, ignore background risk (mistake 2) and introduce a third mistake:

  • Mistake 3: Existential risk mitigation calculations ignore population dynamics.[8]

The most pessimistic existential risk mitigation calculations assume the population hovers around the current 8 billion, but most demographers[9] expect the population to begin decreasing by ~2100, which may be permanent.[10] After all, fertility rates are influenced by myriad factors other than the number of lives we can support in principle. Will MacAskill memorably projected the future population reaching five million ‘stick figures’ of ten billion humans each (5 * 10^16 humans). In contrast, even with their most optimistic fertility rate (1.8), Geruso and Spears estimate the future population at no greater than three stick figures (3 * 10^10 humans), even ignoring background risk.
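
The stick-figure totals can be checked with quick arithmetic (each ‘stick figure’ is ten billion people; the counts are those reported above):

```python
STICK_FIGURE = 10**10  # ten billion humans per 'stick figure'

macaskill_total = 5_000_000 * STICK_FIGURE  # five million stick figures
geruso_spears_total = 3 * STICK_FIGURE      # at most three stick figures

print(f"{macaskill_total:.0e} people")      # 5e+16 people
print(f"{geruso_spears_total:.0e} people")  # 3e+10 people
print(macaskill_total // geruso_spears_total)  # the estimates differ ~1.7 million-fold
```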

Objections

High-Fertility Subpopulations

One may object that high-fertility subpopulations will constitute an increasing share of the total population, effectively reversing declining fertility rates. Thorstad responds that many demographers place little confidence in this scenario, for three reasons:

  1. Fertility rates are dropping in high-fertility subpopulations[11]

  2. Fertility norms are, at best, incompletely transmissible[12]

  3. We could adapt, as high-fertility subpopulations would take centuries to gain an outsized population share[13]

Techno-optimism

One might also object to these population forecasts by adopting more optimistic assumptions. For instance, say…

  • The human population will reach the maximum number of lives our inhabited region can support.

  • We continuously settle the stars in all directions at a tenth of the speed of light.

Christian Tarsney models these assumptions to compare existential risk mitigation with a near-term intervention, finding the former more effective if the risk of extinction per century remains below ~1.34%. Many longtermists already estimate the per-century risk to be higher than this.[6] Moreover, the model assumes humanity moves on immediately from each planet it settles toward the next, which demographers find unlikely. They prefer an alternative story:

  • As settlers populate planets, they gradually use up prime economic opportunities and make the planets crowded.

  • This eventually motivates settlers to colonize new planets.

If you assume it takes 1,000 years to colonize a planet, Tarsney’s model endorses existential risk mitigation only if the risk of extinction per century remains below ~0.145%, almost ten times lower than before. Hence, even with optimistic technological assumptions, population dynamics matter.
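
The gap between these thresholds and typical longtermist risk estimates is simple arithmetic; all figures below are the ones reported above:

```python
# Per-century extinction-risk thresholds from Tarsney's model, as reported above.
threshold_immediate = 0.0134  # settlement proceeds immediately planet to planet
threshold_delayed = 0.00145   # ~1,000 years spent colonizing each planet

print(threshold_immediate / threshold_delayed)  # ≈ 9.2: almost ten times stricter

# Either threshold sits far below the ~15-20% per-century risk many longtermists estimate.
print(0.15 / threshold_immediate)  # ≈ 11: even the looser threshold is exceeded
```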

Conclusion / Brief Summary

Thorstad identifies three mistakes in existential risk mitigation calculations.

  1. They focus on cumulative extinction risk, not per-century risk. This is a mistake for two reasons:

    1. It assumes we can change the cumulative risk for all future generations.

    2. Slightly changing the cumulative risk requires dramatic changes in per-century risk (if humans live a long time).

      1. For instance, reducing cumulative risk by a millionth of a percent (one in a hundred million) requires cutting per-century risk to roughly one in a million, five orders of magnitude below many longtermists’ estimated risk of extinction this century.[6]

  2. They ignore background extinction risk. But, reducing any single existential risk is much less valuable when unaltered extinction risks exist, as it decreases the per-century risk less than it would otherwise.

    1. For instance, one of the most cited cost-effectiveness estimates for mitigating biological extinction risk changes by orders of magnitude when this mistake is accounted for, making biorisk mitigation appear less cost-effective than GiveWell’s recommendations.

  3. They ignore population dynamics by assuming population sizes are only determined by the maximum number of lives that inhabited regions can support.

    1. On our best scientific population models,[9] the population begins declining by ~2100, which may be permanent.[10] MacAskill’s estimated five million future ‘stick figures’ of ten billion humans each may be replaced by only two or three.

    2. Even optimistically assuming a spacefaring future for humanity where we reach the maximum number of lives that inhabited regions can support, the downtime required to establish colonies on each planet substantially reduces the value of existential risk mitigation.

For more, see the paper itself or Thorstad’s blog, Reflective Altruism, which has a five-part series on this paper.

  1. ^

    Including population ethical neutrality (Narveson; Frick), discounting future people (Lloyd; Mogensen), prioritizing present duties (Cordelli), questioning personal prerogatives (Unruh, forthcoming), or being averse to risk (Pettigrew), ambiguity (Buchak), fanaticism (Monton; Smith), or aggregation (Curran; Heikkinen).

  2. ^

    Thorstad makes this assumption for three reasons: (1) to avoid burdensome modeling complexity, (2) to avoid accusations of fiddling with modeling assumptions, and (3) because he believes the assumption will likely not alter his conclusions.

  3. ^

    In this case, an absolute reduction means we’re taking the original extinction risk and subtracting one in a hundred million (or 10^-8).

  4. ^

    In this estimate, Earth becomes less habitable in one billion years.

  5. ^

    Thorstad says, “...an absolute reduction of 10^-8 in cumulative existential risk would bring about a probability of at least 10^-8 that humanity survives for a billion years. However, the probability of surviving for a billion years, or ten million centuries, depends on the cumulative risk r: we survive for ten million centuries with probability P(S) = (1 − r)^(10^7). For our cumulative survival chance P(S) to exceed the seemingly small probability 10^-8 requires an extremely low per-century risk of r ≈ 1.6 ∗ 10^-6, barely a one-in-a-million risk of existential catastrophe per century.”

  6. ^

    “Ord (2020) puts risk at 16.6%; attendees of the 2008 Global Catastrophic Risks Conference at the Future of Humanity Institute gave a median estimate of 19% (Sandberg and Bostrom 2008); and the Astronomer Royal Martin Rees puts the chance of civilizational collapse at 50% by the end of the century (Rees 2003).”

  7. ^

    Thorstad argues this estimate is conservative because it doesn’t consider global health interventions’ long-term welfare benefits, such as their contribution to economic growth.

  8. ^

    “Even readers who place nontrivial confidence in optimistic scenarios for future population growth may not place substantial confidence in [this assumption], on which population size hovers near carrying capacity. There is a significant gap between the most optimistic and the most pessimistic population projections, and moving beyond pessimism need not carry us all the way to full optimism.”

  9. ^
  10. ^
  11. ^
  12. ^
  13. ^