If a risk estimate is used for EA cause prioritisation, it should be our betting odds / subjective probabilities, that is, an average over our epistemic uncertainty. If from our point of view a risk is 10% likely to be >0.001%, and 90% likely to be ~0%, this lower bounds our betting odds at 0.0001%. It doesn’t matter that it’s more likely to be ~0%.
Agreed. I expect my estimate for the nearterm extinction risk from nuclear war to remain astronomically low.
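As a minimal sketch of that averaging, using only the two scenarios and figures from the quoted example:

```python
# Betting odds = expectation of the risk over our epistemic uncertainty.
# Scenarios from the example above: 10% credence the risk is 0.001%,
# 90% credence it is ~0%.
credences = [0.10, 0.90]
risk_if_true = [0.001 / 100, 0.0]  # 0.001% and ~0%

betting_odds = sum(c * r for c, r in zip(credences, risk_if_true))
print(f"{betting_odds:.4%}")  # 0.0001%: the 90% mass at ~0% does not drive it to zero
```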
Statistics of human height are much better understood than nuclear war because we have billions of humans but no nuclear wars. The situation is more analogous to estimating the probability of a 10-meter-tall adult human when you have only ever observed a few thousand monkeys (conventional wars) plus one human infant (WWII), and you also know that every few generations humans mutate into an entirely new species (technological progress).
My study of the monkeys and the infant, i.e. my analysis of past wars, suggested an annual extinction risk from wars of 6.36*10^-14, which is still 1.07 % (= 5.93*10^-12/(5.53*10^-10)) of my best guess.
It would be difficult to create a model suggesting a much higher risk because most of the risk comes from black swan events. Maybe one could upper bound the probability by considering huge numbers of possible mechanisms for extinction and ruling them out, but I don’t see how you could get anywhere near 10^-12.
For the superforecasters’ annual extinction risk from nuclear war until 2050 of 3.57*10^-6 to be correct, my model would need to miss 99.9998 % (= 1 − 5.93*10^-12/(3.57*10^-6)) of the total risk. You say most (i.e. more than 50 %) of the risk comes from black swan events, but wouldn’t it be really surprising if 99.9998 % did? The black swan events would also have to be absent in some sense from XPT’s report, because my estimate accounts for the information I found there.
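For reference, a quick check of the two fractions used above and in my earlier reply, plugging in the quoted figures directly:

```python
implied_by_my_model = 5.93e-12   # risk implied by my analysis of past wars
my_best_guess = 5.53e-10         # my best guess
superforecasters = 3.57e-6       # XPT superforecasters' figure

print(f"{implied_by_my_model / my_best_guess:.2%}")        # 1.07%
print(f"{1 - implied_by_my_model / superforecasters:.4%}") # 99.9998%
```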
I should also clarify that my 10^-6 probability of human extinction given insufficient calorie production is supposed to account for unknown unknowns. Otherwise, my extinction risk from nuclear war would be orders of magnitude lower.
My study of the monkeys and the infant, i.e. my analysis of past wars, suggested an annual extinction risk from wars of 6.36*10^-14, which is still 1.07 % (= 5.93*10^-12/(5.53*10^-10)) of my best guess.
The fact that one model of one process gives a low number doesn’t mean the true number is within a couple orders of magnitude of that. Modeling mortgage-backed security risk in 2007 using a Gaussian copula gives an astronomically low estimate of something like 10^-200, even though those securities did in fact default and cause the financial crisis. If the bankers had adjusted their estimate upward to 10^-198, it would still have been wrong.
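This is not the actual 2007 calculation, but a toy sketch of the general mechanism: a model that leaves out the dominant source of dependence (here, treating defaults as independent versus driven by a shared market factor) can be off by dozens of orders of magnitude. All numbers below are placeholders.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n_loans, p_default, rho = 25, 0.02, 0.8   # placeholder values, not 2007 data

# "Modeled" risk if the loans are treated as independent:
p_independent = p_default ** n_loans       # ~3e-43, astronomically small

# The same joint-default event when every loan loads on a shared market
# factor (e.g. a housing crash), estimated by simple Monte Carlo:
n_sims = 200_000
market = rng.standard_normal(n_sims)                    # shared factor
noise = rng.standard_normal((n_sims, n_loans))          # loan-specific noise
latent = np.sqrt(rho) * market[:, None] + np.sqrt(1 - rho) * noise
p_correlated = np.mean((latent < norm.ppf(p_default)).all(axis=1))

print(p_independent, p_correlated)  # dozens of orders of magnitude apart
```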
IMO it is not really surprising for very near 100% of the risk of something to come from unmodeled risks, if the modeled risk is extremely low. Like say I write some code to generate random digits, and the first 200 outputs are zeros. One might estimate this at 10^-200 probability or adjust upwards to 10^-198, but the probability of this happening is way more than 10^-200 due to bugs.
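A hedged sketch of that arithmetic, treating the 200 zeros as explainable either by a correct generator or by an unmodeled bug; the 1-in-1,000 bug rate is a made-up placeholder.

```python
# Split the probability of "200 zeros in a row" between the modeled mechanism
# (a correct uniform digit generator) and an unmodeled one (a bug that always
# prints zeros). The 1-in-1,000 bug rate is a made-up placeholder.
p_bug = 1e-3
p_zeros_given_correct = 0.1 ** 200   # the "modeled" probability, ~1e-200
p_zeros_given_bug = 1.0              # such a bug makes the observation certain

p_zeros = (1 - p_bug) * p_zeros_given_correct + p_bug * p_zeros_given_bug
share_from_bug = p_bug * p_zeros_given_bug / p_zeros
print(p_zeros, share_from_bug)       # ~1e-3 and ~1.0: the bug dominates entirely
```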
The fact that one model of one process gives a low number doesn’t mean the true number is within a couple orders of magnitude of that.
Agreed. One should not put all weight on a single model. Likewise, one’s best guess for the annual extinction risk from wars should not update to (Stephen Clare’s) 0.01 % just because one model (a Pareto distribution) outputs that. So the question of how one aggregates the outputs of various models is quite important. In my analysis of past wars, I considered 111 models, and got an annual extinction risk of 6.36*10^-14 for what I think is a reasonable aggregation method. You may think my aggregation method is super wrong, but that is different from suggesting I am putting all weight on a single model. Past analyses of extinction risk from war did this, but not mine.
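To illustrate why the choice of aggregation method matters so much, here is a minimal sketch comparing common pooling rules on hypothetical model outputs; neither the numbers nor the pooling rules are the ones from my analysis.

```python
import numpy as np

# Hypothetical outputs of many models for an annual extinction risk, spanning
# several orders of magnitude (placeholders, not the 111 models from my analysis).
model_risks = np.array([1e-16, 1e-14, 1e-13, 1e-12, 1e-10, 1e-8, 1e-4])

arithmetic_mean = model_risks.mean()                  # dominated by the fattest tail
geometric_mean = np.exp(np.log(model_risks).mean())   # closer to the typical model
median = np.median(model_risks)

print(f"{arithmetic_mean:.1e}  {geometric_mean:.1e}  {median:.1e}")
# ~1.4e-05  1.0e-11  1.0e-12 — the pooling rule can matter as much as any single model
```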
IMO it is not really surprising for very near 100% of the risk of something to come from unmodeled risks, if the modeled risk is extremely low. Like say I write some code to generate random digits, and the first 200 outputs are zeros. One might estimate this at 10^-200 probability or adjust upwards to 10^-198, but the probability of this happening is way more than 10^-200 due to bugs.
If it were not for considerations like the above, my best guess for the nearterm extinction risk from nuclear war would be many orders of magnitude below my estimate of 10^-11. I would very much agree that a risk of e.g. 10^-20 would be super overconfident, and would not pay sufficient attention to unknown unknowns.