a) TBH, I'm not very concerned with precise values of point estimates for the probability of human extinction. Because of anthropic bias, the fact that this is necessarily a one-time event, the incredible values involved, doubts about how to extrapolate from past events, etc., there are so many degrees of freedom that I don't expect the uncertainties in question to be properly expressed. Thus, whether the overall "true" x-risk is 1% or 0.00000001% doesn't make a lot of difference to me, at least in terms of policy recommendations.
Thanks for sharing your thoughts, Ramiro! On the one hand, I agree expected value estimates cannot be taken literally. On the other, I think there is a massive difference between one's best guess for the annual extinction risk[1] being 1 % or 10^-10 (in policy and elsewhere). I guess you were not being literal? In terms of risk of personal death, that would be the difference between a non-Sherpa first-timer climbing Mount Everest[2] (risky), and driving for 1 s[3] (not risky).
It is worth noting that one of the upshots of the post I linked above is that priors are important. I see my post as an illustration that priors for extinction risk are quite low, such that inside view estimates should be heavily moderated.
It may often not be desirable to prioritise based on point estimates, but there is a sense in which they are unavoidable. When one decides to prioritise A over B at the margin, one is implicitly relying on point estimates: "expected marginal cost-effectiveness of A" > "expected marginal cost-effectiveness of B".
I'm rather more concerned with odds ratios. If one says that every x-risk estimate is off by n orders of magnitude, I have nothing to say in reply; instead, I'm interested in knowing whether, e.g., one specific type of risk is off, or whether it makes human extinction 100 times more likely than the "background rate of extinction" (I hate this expression, because it suggests we are talking about frequencies).
That makes a lot of sense if one is assessing interventions to decrease extinction risk. However, if the risk is sufficiently low, it will arguably be better to start relying on other metrics. So I think it is worth keeping track of the absolute risk for the purpose of cause prioritisation.
b) So I have been wondering if, instead of trying to compute a causal chain leading from now to extinction, it'd be more useful to do backward reasoning instead: suppose that humanity is extinct (or reduced to a locked-in state) by 3000 CE (or any other period you choose); how likely is it that factor x figures in a causal chain leading to that?
Pre-mortems make sense. Yet, they also involve thinking about the causal chain. In contrast, my post takes an outside view approach without modelling the causal chain, which is also useful. Striking the right balance between inside and outside views is one of the Ten Commandments for Aspiring Superforecasters.
When I try to consider this, I think that a messy, unlucky narrative where many catastrophes concur is at least on a par with a "paperclip-max" scenario. Thus, even though WW3 would not wipe us out, it would make it way more likely that something else would destroy us afterwards. I'll someday try to properly model this.
I agree cascade effects are real, and that a 2nd catastrophe, conditional on a 1st catastrophe, will tend to be more likely than the 1st catastrophe. Still, having 2 catastrophes will tend to be less likely than having 1, and I guess the risk of the 1st catastrophe will often be a good proxy for the overall risk.
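To make the arithmetic behind this concrete, here is a toy calculation in Python. The probabilities below are illustrative assumptions, not estimates from the post: even if a 1st catastrophe sharply raises the chance of a 2nd, the probability of both happening stays below that of the 1st alone.

```python
# Toy numbers, chosen only for illustration (not estimates from the post).
p_first = 0.01               # assumed probability of a 1st catastrophe
p_second_given_first = 0.20  # assumed probability of a 2nd catastrophe, given the 1st

# Cascade effect: the conditional probability of the 2nd catastrophe is much
# higher than the probability of the 1st.
print(p_second_given_first > p_first)  # True

# Still, the joint probability of both catastrophes is below that of the 1st alone,
# so the risk of the 1st catastrophe remains a reasonable proxy for the overall risk.
p_both = p_first * p_second_given_first
print(p_both, p_both < p_first)        # 0.002 True
```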
c) I suspect that some confusions might be due to Parfit's thought experiment: because extinction would be much worse than an event that killed 99% of humanity, people often think about events that could wipe us out once and for all. But, in the real world, an event that killed 99% of humanity at once is way more likely than extinction at once, and the former would probably increase extinction risk by many orders of magnitude (especially if most survivors were confined to a state where they would be fragile against local catastrophes). The last human will possibly die of something quite ordinary.
Relatedly, readers may want to check Luisa Rodriguez's post on the likelihood that civilizational collapse would directly lead to human extinction. Nonetheless, at least following my methodology, which does not capture all relevant considerations, annual war deaths being 99 % of the global population is also astronomically unlikely for most best-fit distributions. You can see this by comparing my estimates for the probability of a 10 % and 100 % population loss.
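For readers who want to see how such tail comparisons work mechanically, here is a minimal sketch. The distributions and parameter values are placeholder assumptions, not the fits from the post; the point is only that, once a distribution is fitted to annual war deaths as a fraction of the global population, the probabilities of a 10 % and a near-total loss can be read off its survival function, and that they are highly sensitive to which distribution is chosen.

```python
import math

# Placeholder parameters, for illustration only (not the post's fitted values).
alpha = 1.6                       # assumed Pareto tail index for the annual death fraction
x_min = 1e-5                      # assumed fraction above which the Pareto tail applies
mu, sigma = math.log(1e-4), 1.5   # assumed lognormal parameters for the same fraction

def pareto_sf(x):
    """P(annual death fraction >= x) under the assumed Pareto tail."""
    return 1.0 if x < x_min else (x_min / x) ** alpha

def lognormal_sf(x):
    """P(annual death fraction >= x) under the assumed lognormal."""
    z = (math.log(x) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2))

for frac in (0.10, 0.99):  # 10 % loss vs 99 % loss
    print(f"{frac:.2f}: Pareto {pareto_sf(frac):.1e}, lognormal {lognormal_sf(frac):.1e}")
```

In this toy example, the Pareto tail makes a 99 % loss only tens of times less likely than a 10 % loss, whereas the lognormal makes it thousands of times less likely, which is one way of seeing why the choice of best-fit distribution matters so much (a caveat that comes up again later in this thread).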
d) There's an interesting philosophical discussion to be had about what "the correct estimate of the probability of human extinction" even means. It's certainly not an objective probability; so the grounds for saying that one such estimate is better than another might be something like its converging towards what an ideal prediction market or logical inductor would output. But then, I am quite puzzled about how such a mechanism could work for x-risks (how would one define prices? Well, one could perhaps value lives with the value of a statistical life, as Martin & Pindyck do).
I would argue there is not a fundamental difference between objective and subjective probabilities. All probabilities are based on past empirical evidence and personal guesses to a certain extent. That being said, I think using heuristics like the ones you suggested can be useful to ground more subjective probabilities.
[1] I prefer focussing on extinction risk.
[2] According to this article, "from 2006 to 2019, the death rate for first-time, non-Sherpa climbers was 0.5% for women and 1.1% for men".
[3] 10^-10 corresponds to 10^-4 micromorts, and driving "370 km" corresponds to 1 micromort. So 10^-10 corresponds to driving 0.037 km (370 km times 10^-4), which would take about 1 s (= 0.037/100*60^2 ≈ 1.3 s) at 100 km/h.
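As a quick check of the arithmetic in footnote [3], using only the figures cited there plus its assumed speed of 100 km/h:

```python
# Reproducing footnote [3] using only the figures it cites.
risk = 1e-10                       # the annual extinction risk under discussion
micromorts = risk / 1e-6           # a 1e-10 chance of death is 1e-4 micromorts
km_per_micromort = 370             # driving distance worth 1 micromort (from the footnote)
km = micromorts * km_per_micromort           # 0.037 km
seconds = km / 100 * 60**2                   # time to drive that distance at 100 km/h
print(micromorts, km, round(seconds, 1))     # -> roughly 0.0001, 0.037 and 1.3 (about 1 s)
```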
I think there is a massive difference between one's best guess for the annual extinction risk[1] being 1 % or 10^-10 (in policy and elsewhere). I guess you were not being literal? In terms of risk of personal death, that would be the difference between a non-Sherpa first-timer climbing Mount Everest[2] (risky), and driving for 1 s[3] (not risky).
I did say that I'm not very concerned with the absolute values of precise point estimates, and more interested in proportional changes and in relative probabilities; allow me to explain:
First, as a rule of thumb, ceteris paribus, a decrease in the average x-risk implies an increase in the expected duration of human survival, and so yields a proportionally higher expected value for reducing x-risk. I think this can be inferred from Thorstad's toy model in Existential risk pessimism and the time of perils. So, if something reduces x-risk by 100x, I'm assuming it doesn't make much difference, from my POV, whether the prior x-risk is 1% or 10^-10, because I'm assuming that the EV will stay the same. This is not always true; I should have clarified this.
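As a minimal sketch of that rule of thumb (my own illustration, assuming a constant annual extinction risk that is reduced permanently, and value proportional to the expected number of future years, rather than Thorstad's full model):

```python
# With a constant annual extinction risk r, the number of future years survived is
# geometric, so the expected duration of survival is roughly 1/r. A permanent 100x
# reduction in r therefore multiplies the expected duration by about 100, regardless
# of whether the baseline risk is 1 % or 10^-10.
def expected_years(annual_risk):
    return 1 / annual_risk

for baseline in (1e-2, 1e-10):
    gain = expected_years(baseline / 100) / expected_years(baseline)
    print(baseline, gain)  # -> roughly 100 for both baselines
```

The permanence assumption is doing real work here, which connects to the caveat about nearterm versus longterm risk raised at the end of this thread.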
Second, it's not that I don't see any difference between "1%" and "10^-10"; I just don't take sentences of the type "the probability of p is 10^-14" at face value. For me, the reference for such measures might be quite ambiguous without additional information. In the excerpt I quoted above, you do provide that, when you say this difference would correspond to the distance between the risk of death from climbing Everest and from driving for 1 s (which, by the way, are extrapolated from frequencies, according to the footnotes you provided).
Now, it looks like you are saying that, given your best estimate, the probability of extinction due to war is really approximately like picking one specific number from a lottery with 10^14 possibilities, or like tossing a fair coin 46-47 times and getting only heads; it's just that, because the estimate is not resilient, there are many things that could make you significantly update your model (unlike the case of the lottery and the fair coin). I do have something like a philosophical problem with that, which is unimportant; but I think it might result in a practical problem, which might be important. So...
It reminds me of a paper by the epistemologist Duncan Pritchard, where he supposes that a bomb will explode if (i) in a lottery, a specific number out of 14 million is drawn, or if (ii) a conjunction of bizarre events (e.g., the spontaneous pronouncement of a certain Polish sentence during the Queen's next speech, the victory of an underdog at the Grand National...) occurs, with an assigned probability of 1 in 14 million. Pritchard concludes that, though both conditions are equiprobable, we consider the latter to be a lesser risk because it is "modally farther away", in a "more distant world". I think that's a terrible solution: people usually prefer to toss a fair coin rather than a coin they know is biased (but whose precise bias they ignore), even though both scenarios have the same "modal distance". Instead, the problem is, I think, that reducing our assessment to a point estimate might fail to convey our uncertainty regarding the differences between the two information sets, and one of the goals of subjective probabilities is actually to provide a measurement of uncertainty (and of the expectation of surprise). That's why, when I'm talking about very different things, I prefer statements like "both probability distributions have the same mean" to claims such as "both events have the same probability".
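One way to make that last point concrete: a fair coin and a coin of unknown bias can both be summarised by the point estimate "P(heads) = 0.5", but the underlying distributions over the chance of heads differ. In the sketch below, the uniform distribution is just an assumed stand-in for "a bias we know nothing about".

```python
from statistics import mean, pvariance

# Two epistemic states that share the same point estimate for P(heads).
fair_coin = [0.5]                             # the chance of heads is known to be 0.5
unknown_bias = [i / 100 for i in range(101)]  # assumed uniform distribution over possible biases

for label, dist in (("fair", fair_coin), ("unknown bias", unknown_bias)):
    print(label, round(mean(dist), 3), round(pvariance(dist), 3))

# Both have mean 0.5, so "both events have the same probability" is true as far as
# it goes; only the spread (variance 0 vs 0.085) conveys how much we should expect
# to be surprised, which is exactly the information a bare point estimate drops.
```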
Finally, I admit that the financial crisis of 2008 might have made me a bit too skeptical of sophisticated models yielding precise estimates with astronomically tiny odds when applied to events that require no far-fetched assumptions, particularly if minor correlations are neglected, and if underestimating the probability of a hazard might make people more lenient regarding it (and so unnecessarily make it more likely). I'm not sure how epistemically sound my behavior is; and I want to emphasize that this skepticism is not quite applicable to your analysis, as you make clear that your probabilities are not resilient and point out the main caveats involved (particularly that, e.g., a lot depends on what type of distribution is a better fit for predicting war casualties, or on what role tech plays).
First, as a rule of thumb, ceteris paribus, a decrease in the average x-risk implies an increase in the expected duration of human survival, and so yields a proportionally higher expected value for reducing x-risk. I think this can be inferred from Thorstad's toy model in Existential risk pessimism and the time of perils. So, if something reduces x-risk by 100x, I'm assuming it doesn't make much difference, from my POV, whether the prior x-risk is 1% or 10^-10, because I'm assuming that the EV will stay the same. This is not always true; I should have clarified this.
Thanks for clarifying! I think you mean that the expected value of the future will not change much if one decreases the nearterm annual existential risk without decreasing the longterm annual existential risk.