People often argue that we urgently need to prioritize reducing existential risk because we live in an unusually dangerous time. If existential risk decreases over time, one might intuitively expect that efforts to reduce x-risk will matter less later on. But in fact, the lower the risk of existential catastrophe, the more valuable it is to further reduce that risk.
Think of it like this: if we face a 50% risk of extinction per century, we will last two centuries on average. If we reduce the risk to 25%, the expected length of the future doubles to four centuries. Halving risk again doubles the expected length to eight centuries. In general, halving x-risk becomes more valuable when x-risk is lower.
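Under the constant-hazard model in this example, survival time is geometrically distributed, so the expected number of centuries is 1/p. A quick sketch of the arithmetic (function name is just for illustration):

```python
# Constant per-century extinction risk p: survival time is geometrically
# distributed, so the expected number of centuries is 1/p.
def expected_centuries(p: float) -> float:
    return 1.0 / p

# Each halving of the risk doubles the expected length of the future,
# so successive halvings add more centuries in absolute terms.
for p in (0.5, 0.25, 0.125):
    print(f"risk {p:.3f}/century -> {expected_centuries(p):.0f} expected centuries")
```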
This argument starts with assumptions implying that civilization has on the order of a 10^-3000 chance of surviving a million years, a duration typical of mammalian species. In the second case it’s 10^-1250. That’s a completely absurd claim, a result of modeling as though you have infinite certainty in a constant hazard rate.
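The orders of magnitude can be checked directly: a million years is 10,000 centuries, and compounding a constant per-century risk over that span gives roughly the figures above (a sketch of the arithmetic, not anyone's actual model):

```python
import math

# Log-10 probability of surviving `centuries` periods under a constant
# per-century extinction risk.
def log10_survival(per_century_risk: float, centuries: int = 10_000) -> float:
    return centuries * math.log10(1.0 - per_century_risk)

print(log10_survival(0.50))  # about -3010, i.e. odds of order 10^-3000
print(log10_survival(0.25))  # about -1249, i.e. odds of order 10^-1250
```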
If you start with some reasonable credence that we're not doomed and can enter a stable state of low risk, this effect becomes second order or negligible. E.g., taking the Precipice estimates as a starting point, say there's an expected 1/6 extinction risk this century and 1/6 for the rest of history, i.e. we probably stabilize enough for civilization to survive about as long as feasible. If the two periods were uncorrelated, this reduces the value of preventing an existential catastrophe this century by between 1/6 and 1/3 compared to preventing one after the risk of this century has passed. That's not negligible, but it's also not first order, and the risk of catastrophe would also cut the returns of saving for the future (your investments and institution/movement-building for x-risk 2 are destroyed if x-risk 1 wipes out humanity).
[For the Precipice estimates, it’s also worth noting that part of the reason for risk being after this century is credence on critical tech developments like AGI happening after this century, so if we make it through that this century, then risk in the later periods is lower since we’ve already passed through the dangerous transition and likely developed the means for stabilization at minimal risk.]
Scenarios where we are 99%+ likely to go prematurely extinct, from a sequence of separate risks that would each drive the probability of survival low, will have very low NPV of the future population. But we should not be near-certain that we are in such a scenario. With uncertainty over reasonable parameter values, the dominant cases wind up being those with substantial risk followed by a substantial likelihood of safe stabilization, and late x-risk reduction work is not favored over reduction soon.
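To illustrate with made-up numbers: even a modest credence in safe stabilization dominates the expected length of the future, because the stabilized branch is so much longer.

```python
# Hypothetical mixture: credence q_doomed that we're in a world with a high
# constant per-century risk, and (1 - q_doomed) that we stabilize at a very
# low risk. Expected centuries is the mixture of 1/risk across branches.
def expected_centuries_mixture(q_doomed=0.9, risk_doomed=0.5, risk_stable=1e-6):
    return q_doomed / risk_doomed + (1 - q_doomed) / risk_stable

# Even 90% credence in doom leaves an expectation of ~100,000 centuries,
# almost all of it coming from the stabilized branch.
print(expected_centuries_mixture())
```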
The problem here is similar to the problem with not modeling uncertainty about discount rates, discussed by Weitzman. If you project forward 100 years, scenarios with high discount rates drop out of your calculation, while the low-discount-rate scenarios dominate at that point. Likewise, the longtermist value of the long-term future is all about the plausible scenarios where hazard rates give a limited cumulative x-risk probability over future history.
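Weitzman's point can be sketched numerically: averaging survival curves over scenarios makes the effective hazard rate decline toward the lowest plausible rate, because the high-hazard scenarios drop out of the expectation (the scenario weights and rates below are illustrative):

```python
# Two hypothetical hazard scenarios, weighted q and (1 - q): expected
# survival probability is the mixture of the scenario survival curves.
def mixture_survival(t: int, q: float = 0.5, high: float = 0.5, low: float = 0.001) -> float:
    return q * (1 - high) ** t + (1 - q) * (1 - low) ** t

# The constant per-century risk that would reproduce the mixture's survival
# probability falls toward the low rate as the horizon t grows.
for t in (1, 10, 100, 1000):
    eff = 1 - mixture_survival(t) ** (1 / t)
    print(f"t={t:>4} centuries: effective per-century risk ~ {eff:.4f}")
```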
This result might not hold up if:
In future centuries, civilization reduces x-risk to such a low rate that further reductions become too difficult.
It’s not required that it *will* do so, merely that it may plausibly go low enough that the total fraction of the future lost to such hazard rates doesn’t become overwhelmingly high.
The passage you quoted was just an example; I don't actually think we should use exponential discounting. The thesis of the essay can still be true when using a declining hazard rate.
If you accept Toby Ord’s numbers of a 1⁄6 x-risk this century and a 1⁄6 x-risk in all future centuries, then it’s almost certainly more cost-effective to reduce x-risk this century. But suppose we use different numbers. For example, say 10% chance this century and 90% chance in all future centuries. Also suppose short-term x-risk reduction efforts only help this century, while longtermist institutional reform helps in all future centuries. Under these conditions, it seems likely that marginal work on longtermist institutional reform is more cost-effective. (I don’t actually think these conditions are very likely to be true.)
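As a toy sketch of that comparison (this model and its numbers are hypothetical, and it ignores differences in cost and tractability): if the future's value is proportional to surviving both periods, the gain from shaving a percentage point off one period's risk scales with the survival probability of the other period.

```python
# Toy model: value of the long-run future is proportional to surviving
# both this century and all future centuries.
def future_value(p_now: float, p_later: float) -> float:
    return (1 - p_now) * (1 - p_later)

# Gain from reducing each period's risk by the same small delta.
def marginal_gains(p_now: float, p_later: float, delta: float = 0.01):
    base = future_value(p_now, p_later)
    return (future_value(p_now - delta, p_later) - base,
            future_value(p_now, p_later - delta) - base)

# Ord-style numbers (1/6 and 1/6): the two gains are equal in this model.
print(marginal_gains(1/6, 1/6))
# 10% this century, 90% later: reducing later risk gains 9x as much.
print(marginal_gains(0.10, 0.90))
```

In the symmetric case the toy model can't distinguish the two interventions, which is why the text's verdict there rests on other considerations like tractability.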
(Aside: Any assumption of a fixed <100% chance of existential catastrophe runs into the problem that the EV of the future becomes infinite. As far as I know, we haven't figured out any good way to compare infinite futures. So even though it's intuitively plausible, we don't know if we can actually say that an 89% chance of extinction is preferable to a 90% chance (maybe limit-discounted utilitarianism can say so). This is not to say we shouldn't assume a <100% chance, just that if we do, we run into some serious unsolved problems.)