My analysis of past wars suggested an annual extinction risk from wars of 6.36*10^-14, which is only around 0.01 % (= 6.36*10^-14/(5.53*10^-10)) of my best guess.
The fact that one model of one process gives a low number doesn’t mean the true number is within a couple of orders of magnitude of that. Modeling mortgage-backed security risk in 2007 with a Gaussian copula gave an astronomically low estimate of something like 10^-200, even though those securities did in fact default and cause the financial crisis. Had the bankers adjusted their estimate upward to 10^-198, it would still have been wrong.
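As a toy sketch of the mechanism (not the bankers' actual model, and with made-up numbers), a single assumption baked into a joint-default model can swing the estimated tail probability by many orders of magnitude. Here, assuming 10 securities default independently gives 10^-10, while a simple one-factor correlation model (a hypothetical portfolio with a common-factor correlation of 0.3) gives a probability several orders of magnitude higher:

```python
import math
import random

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical portfolio: 10 securities, each with a 10% marginal
# chance of default (default threshold ~ Phi^-1(0.1) = -1.2816).
N_ASSETS = 10
THRESHOLD = -1.2816
RHO = 0.3  # assumed common-factor correlation (made up)

# Model A: defaults are independent.
p_independent = 0.1 ** N_ASSETS  # 1e-10 -- "astronomically low"

# Model B: one-factor Gaussian model. Conditional on the common
# factor M, each asset defaults independently with probability
# Phi((threshold - sqrt(rho)*M) / sqrt(1 - rho)); average over M.
random.seed(0)
n_samples = 200_000
total = 0.0
for _ in range(n_samples):
    m = random.gauss(0.0, 1.0)
    p_cond = norm_cdf((THRESHOLD - math.sqrt(RHO) * m) / math.sqrt(1.0 - RHO))
    total += p_cond ** N_ASSETS
p_correlated = total / n_samples

print(f"independent model: {p_independent:.1e}")
print(f"correlated model:  {p_correlated:.1e}")  # several orders of magnitude higher
```

The point is not that the correlated model is right; it is that the output of either model is only as good as its assumptions, so a 10^-200 figure mostly measures the model, not the world.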
IMO it is not really surprising for very near 100% of the risk of something to come from unmodeled risks, if the modeled risk is extremely low. Like say I write some code to generate random digits, and the first 200 outputs are zeros. One might estimate this at 10^-200 probability or adjust upwards to 10^-198, but the probability of this happening is way more than 10^-200 due to bugs.
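The digit-generator example can be made concrete. Below is a hypothetical bug of the kind that makes "200 zeros in a row" vastly more likely than the naive 10^-200: integer division where a modulus was intended silently maps every draw to zero, so the probability of the all-zeros output is roughly the probability of writing such a bug, not 10^-200.

```python
import random

def buggy_digit():
    # Intended: a uniform random digit 0-9.
    # The bug: integer division by 10 (instead of `% 10` on a larger
    # draw, or just using randrange(10) directly) always yields 0.
    return random.randrange(10) // 10

digits = [buggy_digit() for _ in range(200)]
print(digits.count(0))  # 200 -- every single output is zero
print(0.1 ** 200)       # the naive "model" probability, ~1e-200
```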
The fact that one model of one process gives a low number doesn’t mean the true number is within a couple orders of magnitude of that.
Agreed. One should not put all weight in a single model. Likewise, one’s best guess for the annual extinction risk from wars should not update to (Stephen Clare’s) 0.01 % just because one model (Pareto distribution) outputs that. So the question of how one aggregates the outputs of various models is quite important. In my analysis of past wars, I considered 111 models, and got an annual extinction risk of 6.36*10^-14 for what I think is a reasonable aggregation method. You may think my aggregation method is super wrong, but this is different from suggesting I am putting all weight into a single method. Past analyses of war extinction risk did this, but not mine.
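How much the aggregation method matters can be illustrated with a small sketch (the estimates below are made-up values, not the outputs of the 111 models in the actual analysis). When model outputs span many orders of magnitude, the arithmetic mean is dominated by the most pessimistic model, while the geometric mean tracks the median order of magnitude, so the choice alone can shift the headline number by several orders of magnitude:

```python
import math

# Hypothetical annual extinction-risk estimates from five models,
# spanning many orders of magnitude (illustrative values only).
estimates = [1e-4, 1e-7, 1e-10, 1e-13, 1e-16]

# Arithmetic mean: dominated by the single largest estimate.
arithmetic_mean = sum(estimates) / len(estimates)

# Geometric mean: averages the exponents, i.e. the "typical"
# order of magnitude across models.
geometric_mean = math.exp(sum(math.log(x) for x in estimates) / len(estimates))

print(f"arithmetic mean: {arithmetic_mean:.1e}")  # ~2e-5
print(f"geometric mean:  {geometric_mean:.1e}")   # 1e-10
```

Here the two aggregation methods disagree by more than five orders of magnitude, which is why the question of how to aggregate is so important.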
IMO it is not really surprising for very near 100% of the risk of something to come from unmodeled risks, if the modeled risk is extremely low. Like say I write some code to generate random digits, and the first 200 outputs are zeros. One might estimate this at 10^-200 probability or adjust upwards to 10^-198, but the probability of this happening is way more than 10^-200 due to bugs.
If it were not for considerations like the above, my best guess for the near-term extinction risk from nuclear war would be many orders of magnitude below my estimate of 10^-11. I would very much agree that a risk of e.g. 10^-20 would be super overconfident, paying insufficient attention to unknown unknowns.