I think there is a massive difference between one’s best guess for the annual extinction risk[1] being 1 % or 10^-10 (in policy and elsewhere). I guess you were not being literal? In terms of risk of personal death, that would be the difference between a non-Sherpa first-timer climbing Mount Everest[2] (risky), and driving for 1 s[3] (not risky).
I did say that I’m not very concerned with the absolute values of precise point estimates, and that I’m more interested in proportional changes and relative probabilities; allow me to explain:
First, as a rule of thumb, ceteris paribus, a decrease in the average x-risk implies an increase in the expected duration of human survival, and so a proportionally higher expected value for reducing x-risk. I think this can be inferred from Thorstad’s toy model in Existential risk pessimism and the time of perils. So, if something reduces x-risk by 100x, I’m assuming it doesn’t make much difference, from my point of view, whether the prior x-risk is 1% or 10^-10, because I’m assuming that the EV will stay the same. This is not always true; I should have clarified this.
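To make that rule of thumb concrete, here is a rough Python sketch (my own toy illustration of a constant-risk, geometric-series model in the spirit of the one referred to above, not Thorstad’s actual model): with a constant per-period extinction risk r and value v per period survived, the expected value of the future scales like 1/r, so cutting r by 100x multiplies the expected value by roughly 100 whether r starts at 1% or at 10^-10.

```python
# Minimal sketch (my own illustration, not from the original comment): with a
# constant per-period extinction risk r and value v per period survived, the
# expected value of the future is a geometric series, v * (1 - r) / r, which
# scales like 1/r for small r.

def expected_future_value(r: float, v: float = 1.0) -> float:
    """Expected total value with constant per-period extinction risk r."""
    # Sum over periods t >= 1 of v * (1 - r)^t  =  v * (1 - r) / r
    return v * (1.0 - r) / r

for prior in (1e-2, 1e-10):
    reduced = prior / 100  # a 100x reduction in the per-period risk
    ratio = expected_future_value(reduced) / expected_future_value(prior)
    print(f"prior risk {prior:.0e}: EV multiplies by ~{ratio:.1f}")

# In both cases the EV multiplies by roughly 100x, which is the "proportional"
# sense in which the baseline level (1% vs 10^-10) makes little difference here.
```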
Second, it’s not that I don’t see any difference between “1%” and “10^-10”; I just don’t take sentences of the type “the probability of p is 10^-14” at face value. For me, the reference for such measures can be quite ambiguous without additional information. In the excerpt I quoted above, you do provide that, when you say this difference would correspond to the gap between the risk of death from climbing Everest and from driving for 1 s (which, by the way, are extrapolated from frequencies, according to the footnotes you provided).
Now, it looks like you are saying that, on your best estimate, the probability of extinction due to war is roughly like that of picking one specific number from a lottery with 10^14 possibilities, or of tossing a fair coin 46-47 times and getting only heads; it’s just that, because the estimate is not resilient, there are many things that could make you significantly update your model (unlike the lottery and the fair-coin cases). I have something like a philosophical problem with that, which is unimportant; but I think it might lead to a practical problem, which might be important. So...
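(Just to check that arithmetic with a couple of lines of Python; this is my own back-of-the-envelope calculation, not something from the original post:)

```python
import math

# How many fair coin tosses have an all-heads probability of about 10^-14?
n = math.log(1e14, 2)        # solve (1/2)^n = 10^-14  =>  n = log2(10^14)
print(round(n, 1))           # ~46.5, i.e. between 46 and 47 tosses
print(0.5 ** 46, 0.5 ** 47)  # ~1.4e-14 and ~7.1e-15 bracket 1e-14
```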
It reminds me of a paper by the epistemologist Duncan Pritchard, where he supposes that a bomb will explode if either (i) a specific number out of 14 million is drawn in a lottery, or (ii) a conjunction of bizarre events occurs (e.g., the spontaneous pronouncement of a certain Polish sentence during the Queen’s next speech, the victory of an underdog at the Grand National...), to which a probability of 1 in 14 million is assigned. Pritchard concludes that, though both conditions are equiprobable, we consider the latter to be a lesser risk because it is “modally farther away”, in a “more distant world”. I think that’s a terrible solution: people usually prefer to toss a fair coin rather than a coin they know is biased but whose precise bias they don’t know, even though both scenarios have the same “modal distance”. Instead, I think the problem is that reducing our assessment to a point estimate may fail to convey our uncertainty about the difference between the two information sets; and one of the goals of subjective probabilities is precisely to provide a measure of uncertainty (and of the expectation of surprise). That’s why, when I’m talking about very different things, I prefer statements like “both probability distributions have the same mean” to claims such as “both events have the same probability”.
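As a rough illustration of what I mean by “same mean, different distribution”, here is a small sketch (my own, with an arbitrary uniform prior standing in for the coin of unknown bias):

```python
# A known-fair coin vs. a coin of unknown bias: both assign probability 0.5 to
# heads on the next toss, but they encode very different uncertainty, which
# shows up as different updating after evidence.

fair_mean = 0.5                       # known fair coin: P(heads) is exactly 0.5

# Unknown-bias coin modelled with a uniform (Beta(1, 1)) prior over its bias p.
alpha, beta = 1.0, 1.0
unknown_mean = alpha / (alpha + beta)  # prior mean is also 0.5

print(fair_mean, unknown_mean)         # 0.5 0.5 -- same point estimate

# Observe 5 heads in a row and update the unknown-bias coin.
heads = 5
alpha += heads
print(fair_mean, alpha / (alpha + beta))  # fair coin: still 0.5; unknown coin: ~0.86
# The point estimates started out identical, but only one of them was resilient.
```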
Finally, I admit that the financial crisis of 2008 might have made me a bit too skeptical of sophisticated models yielding precise estimates with astronomically tiny odds when applied to events that require no far-fetched assumptions, particularly if minor correlations are neglected, and if underestimating the probability of a hazard might make people more lenient about it (and so unnecessarily make it more likely). I’m not sure how epistemically sound my behavior is; and I want to emphasize that this skepticism is not really applicable to your analysis, as you make clear that your probabilities are not resilient and point out the main caveats involved (particularly that a lot depends on, e.g., what type of distribution best fits war casualties, or on what role technology plays).
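For what it’s worth, here is a toy numerical illustration of the correlation point (entirely made-up numbers of my own, not a claim about your model):

```python
# How neglecting a small correlation can swamp an "astronomically tiny"
# joint-failure estimate. Ten safeguards, each failing with probability 1%.
p = 0.01
independent = p ** 10                  # assuming independence: 10^-20
print(f"{independent:.0e}")

# Now suppose a rare common-cause event (probability 0.1%) makes all of them
# fail together, while they remain independent otherwise.
p_common = 0.001
correlated = p_common + (1 - p_common) * independent
print(f"{correlated:.0e}")             # ~1e-3: about 17 orders of magnitude larger
```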
First, as a rule of thumb, ceteris paribus, a decrease in the average x-risk implies an increase in the expected duration of human survival, and so a proportionally higher expected value for reducing x-risk. I think this can be inferred from Thorstad’s toy model in Existential risk pessimism and the time of perils. So, if something reduces x-risk by 100x, I’m assuming it doesn’t make much difference, from my point of view, whether the prior x-risk is 1% or 10^-10, because I’m assuming that the EV will stay the same. This is not always true; I should have clarified this.
I think you mean that the expected value of the future will not change much if one decreases the nearterm annual existential risk without decreasing the longterm annual existential risk.
Thanks for clarifying!