Summary: Tiny Probabilities and the Value of the Far Future (Petra Kosonen)

This post is a summary of Tiny probabilities and the value of the far future, a Global Priorities Institute Working Paper by Petra Kosonen. This post is part of my sequence of GPI Working Paper summaries.

If you’d like a brief summary, skip to “Conclusion/​Brief Summary.”

Introduction

Many people reasonably doubt that our actions today have more than a tiny probability of improving outcomes in the distant future, leading them to eschew longtermism. After all, something feels wrong with a theory that lets tiny probabilities of enormous value dictate our actions. For instance, it leaves us vulnerable to Pascal’s mugging.

However, Kosonen argues that even if we discount small probabilities—which protects us from cases like Pascal’s mugging—longtermism is not undermined. She refutes three arguments:

  1. The Low Risks Argument: The probabilities of existential risks are so low that we should ignore them.

  2. The Small Future Argument: After we ignore unlikely outcomes (e.g., space settlement or digital minds), the number of individuals in the future is too low for longtermism to be correct.

  3. The No Difference Argument: The probability we make a difference in whether extinction occurs is so low we should ignore it.

The Low Risks Argument

Naive Discounting

Before discussing arguments against longtermism, Kosonen introduces one way to discount probabilities, which she calls Naive Discounting.

  • Naive Discounting: Ignore probabilities below some threshold[1].

The Low Risks Argument

  • The Low Risks Argument: The probabilities of existential risks are so low that we should ignore them.

In response, Kosonen argues it’s unclear what Naive Discounting says about The Low Risks Argument:

  1. Probabilities of existential risks from various sources in the next century seem to fall above any reasonable discounting threshold.[2]

  2. For any risk, we can describe it specifically enough that its probability falls below the threshold (e.g., extinction from an asteroid on Jan. 4, 2055), or broadly enough that it falls above the threshold (e.g., extinction from a pandemic this century).

  3. We don’t have guidelines for choosing how specific to make these risks when assessing their probabilities, so whether they fall above or below the threshold is arbitrary.

  4. Hence, it’s unclear what Naive Discounting recommends, so it doesn’t undermine longtermism via the Low Risks Argument. A toy sketch of this specificity problem follows below.
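
To see the specificity problem concretely, here is a toy sketch with hypothetical numbers (mine, not the paper’s): the same pandemic risk clears a discounting threshold when described broadly but falls below it when carved into day-by-day risks.

```python
# Toy illustration of the specificity problem (hypothetical numbers, not the paper's).
# The same underlying risk can fall above or below a Naive Discounting threshold
# depending on how finely it is described.

THRESHOLD = 1e-6  # hypothetical discounting threshold: ignore anything less probable

century_risk = 0.01       # broad description: extinction from a pandemic this century
days_in_century = 36_525  # roughly 100 years of days

# Narrow description: extinction from a pandemic on one particular day,
# assuming (for illustration) the century-level risk is spread evenly across days.
daily_risk = century_risk / days_in_century

print(f"Broad description:  {century_risk:.2e} -> ignored? {century_risk < THRESHOLD}")
print(f"Narrow description: {daily_risk:.2e} -> ignored? {daily_risk < THRESHOLD}")
# The broad description clears the threshold, but every narrow description falls
# below it, even though the narrow descriptions jointly cover the same risk.
```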

Tail Discounting

Another, more plausible way to discount probabilities, called Tail Discounting, also fails to undermine longtermism.

  • Tail Discounting: Ignore the very best and very worst outcomes (tails). That is, ignore the grey areas under this curve:

Figure 1

However, Kosonen argues Tail Discounting doesn’t ignore the possibility of extinction, because extinction plausibly doesn’t sit in the tails: outcomes worse than extinction[3] and outcomes better than extinction are each likely enough that extinction falls in the non-ignored middle of the distribution (see Figure 2).

Figure 2
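
Here is a minimal sketch of how Tail Discounting works on a toy outcome distribution (the outcomes and probabilities are my own illustrative choices, not Kosonen’s): because outcomes worse and better than extinction carry meaningful probability, extinction survives the discounting.

```python
# Toy sketch of Tail Discounting (my own illustrative outcomes and probabilities):
# ignore outcomes whose probability mass sits entirely within the extreme tails,
# then check whether extinction is among the ignored outcomes.

TAIL_MASS = 0.05  # hypothetical: ignore the most extreme 5% of probability on each side

# (outcome, probability), ordered from worst to best
outcomes = [
    ("vast future suffering", 0.02),
    ("net-negative world",    0.13),
    ("extinction",            0.10),
    ("modest future",         0.60),
    ("flourishing future",    0.13),
    ("utopian future",        0.02),
]

def outcomes_in_tails(ordered_outcomes, tail_mass):
    """Return the outcomes whose cumulative probability fits inside a tail."""
    ignored = set()
    for ordering in (ordered_outcomes, list(reversed(ordered_outcomes))):
        cumulative = 0.0
        for name, prob in ordering:
            if cumulative + prob <= tail_mass:
                ignored.add(name)
                cumulative += prob
            else:
                break
    return ignored

ignored = outcomes_in_tails(outcomes, TAIL_MASS)
print("Ignored outcomes:", ignored)                    # only the two extremes
print("Extinction ignored?", "extinction" in ignored)  # False: it sits in the middle
```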

Hence, Kosonen argues the Low Risks Argument (that existential risks are so unlikely we should ignore them) doesn’t undermine longtermism because:

  1. Naive Discounting struggles with how to specify risks, so it’s unclear what it says.

  2. Tail Discounting doesn’t ignore extinction risks as long as there are meaningful probabilities of outcomes better and worse than extinction.

The Small Future Argument

Kosonen addresses another argument against longtermism.

  • The Small Future Argument: After we ignore unlikely outcomes (e.g., space settlement or digital minds), the number of individuals in the future is too low for longtermism to be correct.

She compares the cost-effectiveness of giving $10,000 to the Against Malaria Foundation (AMF) with that of giving it to existential risk reduction. She calculates[4] the following effects:

  • AMF: Save 2.5 lives

  • Asteroid detection: Reduce the chance of extinction from an asteroid this century by 1 in 33,000 trillion[5]

  • Pandemic prevention: Reduce the chance of extinction from a pandemic this century by 1 in 50 trillion[5]

  • AI safety: Reduce the chance of extinction from AI this century by 1 in 10 billion[5]

Using these, she calculates the size of the future population required for existential risk reduction to beat the AMF by saving more than 2.5 lives in expectation (see Figure 3).

Figure 3

Assuming the population reaches and stays at 11 billion and people live for 80 years on average, she calculates the future population if humanity continues for…

  • 87,000 years (based on extinction risks from natural causes[6])

    • There will be 12 trillion future humans, meaning AI safety beats the AMF

  • 1 million years (the average lifespan of hominins)

    • The future holds 140 trillion people, meaning AI safety and pandemic prevention beat the AMF

  • 1 billion years (until Earth becomes uninhabitable)

    • The future population is 140,000 trillion, meaning AI safety, pandemic prevention, and asteroid detection beat the AMF

In fact, with these assumptions, humanity must continue for only 265 years for AI safety to beat the AMF. Humanity is expected to continue this long even with a 1 in 6 probability of extinction in the next 100 years (Toby Ord’s estimate).
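
As a rough check on the break-even logic, here is a simplified reconstruction (my own sketch; Kosonen’s calculations on pages 11 to 14 are more careful) using the same assumptions of an 11 billion population and 80-year lifespans. It reproduces the future-population figures above and the population each intervention needs in order to beat the AMF.

```python
# Simplified reconstruction of the break-even comparison (my own sketch).
# Assumes a steady population of 11 billion with 80-year lifespans, so a
# horizon of T years contains 11e9 * T / 80 future lives.

AMF_LIVES_SAVED = 2.5  # expected lives saved by $10,000 to the AMF

# Absolute reductions in extinction risk per $10,000 (Kosonen's figures)
risk_reductions = {
    "Asteroid detection": 1 / 33_000e12,  # 1 in 33,000 trillion
    "Pandemic prevention": 1 / 50e12,     # 1 in 50 trillion
    "AI safety": 1 / 10e9,                # 1 in 10 billion
}

def future_population(years, population=11e9, lifespan=80):
    """Number of future lives over a horizon of `years` under the simple model."""
    return population * years / lifespan

# Break-even: expected lives saved = risk reduction * future population,
# so x-risk reduction beats the AMF once the future holds 2.5 / reduction people.
for cause, reduction in risk_reductions.items():
    print(f"{cause}: needs a future of ~{AMF_LIVES_SAVED / reduction:.3g} people")

for years in (87_000, 1_000_000, 1_000_000_000):
    print(f"{years:>13,} years -> ~{future_population(years):.3g} future people")
```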

Kosonen also argues that assigning a negligible probability to space settlement or digital minds is overconfident, and if we don’t ignore these possibilities, the expected number of future lives becomes far greater.

Overall, Kosonen thinks the future seems large enough for longtermism to ring true, even if we ignore unlikely possibilities.

The No Difference Argument

  • The No Difference Argument: The probability we make a difference in whether extinction occurs is so low we should ignore it.

On its face, the No Difference Argument seems to undermine longtermism. However, Kosonen argues it faces collective action problems. For instance, imagine an asteroid is about to hit Earth. We have multiple defenses, but each one has a very low probability of working and incurs a small cost.

Since each defense individually probably won’t make a difference, the No Difference Argument says we shouldn’t launch any of them, even though launching many of them would almost certainly stop the asteroid.
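
A toy calculation (with hypothetical numbers of my own) makes the point vivid: each defense has a negligible chance of being decisive, yet launching all of them makes success nearly certain.

```python
# Toy numbers of my own to illustrate the collective action point: each defense
# has only a tiny chance of making the difference, yet launching many of them
# makes stopping the asteroid nearly certain.

p_single = 1e-4       # hypothetical chance that one defense stops the asteroid
n_defenses = 100_000  # hypothetical number of available defenses

p_at_least_one = 1 - (1 - p_single) ** n_defenses

print(f"One defense succeeds:  {p_single:.4%}")
print(f"At least one succeeds: {p_at_least_one:.4%}")  # roughly 99.995%
# Discounting each defense's tiny probability in isolation says "don't launch",
# even though launching them all would almost certainly stop the asteroid.
```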

Kosonen argues our decisions about whether we work on existential risk reduction are similar to the asteroid defenses; we plausibly ought to take others’ decisions into account and consider whether the collective has a non-negligible probability of making a difference. She discusses the justifications for this view as well as the problems with it on pages 27 to 33.

Moreover, she thinks the collective has a non-negligible probability of preventing extinction. For instance, spending $1 billion on AI safety would plausibly provide an absolute reduction in extinction risk of at least 1 in 100,000—though this estimate is conservative.[7]

Conclusion/​Brief Summary

To recap, Kosonen argues that even if we discount small probabilities, longtermism holds true. She refutes three arguments.

  1. The Low Risks Argument: The probabilities of existential risks are so low that we should ignore them. She finds this isn’t true because…

    1. If you ignore probabilities below some threshold (Naive Discounting), it’s unclear what the view recommends, since whether a risk falls below the threshold depends arbitrarily on how specifically it is described.

    2. If you ignore the best and worst extremes (Tail Discounting), you won’t ignore extinction risk, as there are (plausibly) meaningful probabilities of outcomes better and worse than extinction.

  2. The Small Future Argument: After we ignore unlikely outcomes (e.g., space settlement or digital minds), the number of individuals in the future is too low for longtermism to be correct.

    1. She finds this isn’t true because, even restricting attention to likely scenarios, humanity plausibly persists long enough for longtermism to hold.

  3. The No Difference Argument: The probability we make a difference in whether extinction occurs is so low we should ignore it.

    1. She offers reasons you should consider our collective probability of making a difference instead of your individual probability.

    2. She argues our collective probability of preventing extinction is non-negligible.

  1. ^

    Kosonen finds it implausible that there is a specific required threshold. For instance, Monton, who defends probability discounting, considers this threshold “subjective within reason.”
    Others have proposed thresholds, such as 1 in 10,000 (Buffon) and 1 in 144,768 (Condorcet). Buffon’s threshold is the probability of a 56-year-old dying on a given day, a chance most people ignore. Condorcet had a similar justification.

  2. ^

    See pages 6 and 7 for the estimates she consults.

  3. ^

    She thinks there’s a non-negligible probability that human and non-human suffering make the world’s value net negative (an outcome worse than extinction).

  4. ^

    See pages 11 to 14 for her calculations.

  5. ^

    These are absolute (not relative) reductions in probabilities.

  6. ^

    Snyder-Beattie et al. (2019).

  7. ^

    This calculation assumes extinction from AI carries a 0.1% risk in the next 100 years (while the median expert estimate is 5%), and that $1 billion decreases that risk by only 1% in relative terms, yielding the absolute reduction of 0.1% × 1% = 1 in 100,000.