Clearly you are right. That said, the examples you give involve the kind of frequentist probability for which one can actually measure rates. This is quite different from the probability given in the survey, which presumably comes from an imperfect Bayesian model with imprecise inputs.
I also don’t want to belabor the point… but I’m pretty sure my probability of being struck by lightning today is far from 0.001%. Given where I live and today’s weather, it could be a few orders of magnitude lower. If I use your unadjusted probability (10 micromorts) and am willing to spend $25 to avert a micromort, I would conclude that I should invest $250 in lightning protection today… which is exactly the kind of wrong conclusion my post warns about.
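To make that arithmetic concrete, here is a minimal sketch in Python, using the $25-per-micromort willingness to pay from above; the local adjustment factor is a made-up assumption for illustration:

```python
# Willingness to pay to avert one micromort (a one-in-a-million chance of death).
dollars_per_micromort = 25

# Unadjusted figure from the comment above: 0.001% ~= 10 micromorts.
unadjusted_micromorts = 10

# Hypothetical adjustment for where I live and today's weather:
# suppose the true risk is ~3 orders of magnitude lower (made-up factor).
local_adjustment = 1e-3
adjusted_micromorts = unadjusted_micromorts * local_adjustment

print(f"Unadjusted: spend up to ${dollars_per_micromort * unadjusted_micromorts:.2f}")  # $250.00
print(f"Adjusted:   spend up to ${dollars_per_micromort * adjusted_micromorts:.2f}")    # $0.25
```

The same willingness to pay yields either $250 or a few cents, depending entirely on a probability that is easy to get wrong by orders of magnitude.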
I think humility is useful in cases like the present survey question, where a specific low probability, derived from an imperfect model, can change the entire conclusion. There are many computations whose outcome is fairly robust to small absolute estimation errors (e.g., intervention (1) in the question). On the other hand, for computations that are highly sensitive to a low probability, we should be extra careful about that probability estimate.
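A toy illustration of the contrast (invented payoffs and probabilities, not the survey's actual interventions), showing why the same small absolute error matters so much more in the second kind of computation:

```python
# Invented numbers for illustration only; not the survey's actual interventions.

def expected_value(p, payoff):
    return p * payoff

abs_error = 0.001  # the same small absolute error in the probability estimate

# Robust case: the estimate sits around 0.5, so a +/-0.001 shift barely matters.
robust = [expected_value(0.5 + d, 1_000) for d in (-abs_error, 0, abs_error)]

# Sensitive case: the estimate is 1e-5, so the same 0.001 shift
# moves the expected value by two orders of magnitude.
sensitive = [expected_value(1e-5 + d, 1_000_000) for d in (0, abs_error)]

print("Robust EVs:   ", robust)     # roughly [499.0, 500.0, 501.0]
print("Sensitive EVs:", sensitive)  # roughly [10.0, 1010.0]
```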
Richard Chappell writes something similar here, better than I could. Thanks Lizka for linking to that post!
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They’re more or less made up, and could easily be “off” by many, many orders of magnitude. Per Holden Karnofsky’s argument in ‘Why we can’t take explicit expected value estimates literally’, Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
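To make that adjustment concrete, here is a rough sketch with invented numbers, using the standard normal–normal Bayesian update on a log scale; it illustrates the general shrinkage effect, not Karnofsky's actual model:

```python
# Rough sketch of a conjugate normal-normal Bayesian update, with invented numbers.
# This illustrates the general shrinkage effect, not Karnofsky's actual model.

def posterior_mean(prior_mu, prior_sigma, estimate, estimate_sigma):
    """Precision-weighted average of the prior mean and the explicit estimate."""
    w_prior = 1 / prior_sigma**2
    w_est = 1 / estimate_sigma**2
    return (w_prior * prior_mu + w_est * estimate) / (w_prior + w_est)

# Work in log10(impact). Prior: a typical action is worth ~10 units,
# give or take an order of magnitude.
prior_mu, prior_sigma = 1.0, 1.0

# A robust estimate ("100 units", tight error bars) moves the posterior a lot...
print(posterior_mean(prior_mu, prior_sigma, estimate=2.0, estimate_sigma=0.3))  # ~1.92

# ...while a non-robust estimate claiming a million units, uncertain by several
# orders of magnitude, is discounted almost all the way back to the prior.
print(posterior_mean(prior_mu, prior_sigma, estimate=6.0, estimate_sigma=4.0))  # ~1.29
```

In this toy setup, the further a claim sits from the prior and the wider its error bars, the more of it gets discounted away.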
Maybe I should have titled this post differently, for example “Beware of non-robust probability estimates multiplied by large numbers”.
This is a great point.