Clearly you believe that probabilities can be less than 1%, reliably. Your probability of being struck by lightning today is not “0% or maybe 1%”, it’s on the order of 0.001%. Your probability of winning the lottery is not “0% or 1%”, it’s ~0.0000001%. I am confident you deal with probabilities that have much less than 1% error all the time, and feel comfortable using them.
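For a concrete sense of scale, here is a quick back-of-the-envelope check in Python. I’m assuming Powerball-style rules (match 5 of 69 white balls plus 1 of 26 red balls); that’s my assumption for illustration, not a figure from this thread:

```python
from math import comb

# Illustrative only: jackpot odds for a Powerball-style lottery
# (pick 5 of 69 white balls plus 1 of 26 red balls; an assumed
# rule set, not a figure taken from the discussion above).
jackpot_odds = comb(69, 5) * 26
p_win = 1 / jackpot_odds

print(f"1 in {jackpot_odds:,}")   # 1 in 292,201,338
print(f"{p_win:.10%}")            # ~0.0000003422%, i.e. order 0.0000001%
```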
It doesn’t make sense to think of humility as something absolute like “don’t give highly specific probabilities”. You frequently have justified belief in a highly specific probability: the probability that random.org’s random number generator will return “2” when asked for a random integer between 1 and 10 is exactly 10%, not 11%, not 9%, with very little uncertainty about that number.
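A minimal simulation sketch makes the point (using Python’s own PRNG as a stand-in for random.org): the empirical frequency converges to that exact 10%.

```python
import random

# Sanity check: frequency of drawing "2" from a uniform draw on 1..10.
# Python's PRNG stands in for random.org here; the point is only that
# a well-understood mechanism justifies a very precise probability.
trials = 1_000_000
hits = sum(1 for _ in range(trials) if random.randint(1, 10) == 2)
print(f"empirical frequency: {hits / trials:.4%}")  # ~10.00%
```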
Clearly you are right. That said, the examples that you give are the kind of frequentist probabilities for which one can actually measure rates. This is quite different from the probability given in the survey, which presumably comes from an imperfect Bayesian model with imprecise inputs.
I also don’t want to belabor the point… but I’m pretty sure my probability of being struck by lightning today is far from 0.001%. Given where I live and today’s weather, it could be a few orders of magnitude lower. If I use your unadjusted probability (0.001% = 10 micromorts) and am willing to spend $25 to avert a micromort, I would conclude that I should invest $250 in lightning protection today… that seems like the kind of wrong conclusion that my post warns about.
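Here is that same arithmetic in a few lines, to make the sensitivity explicit. The $25/micromort willingness to pay is from above; the adjusted risk is a hypothetical “few orders of magnitude lower” value I picked for illustration:

```python
# Reproducing the arithmetic above, plus the sensitivity it illustrates.
wtp_per_micromort = 25   # $ per 1e-6 probability of death averted (from above)

unadjusted_risk = 10     # micromorts, i.e. the 0.001% figure
adjusted_risk = 0.01     # micromorts: a hypothetical risk 3 orders lower

print(f"unadjusted: spend up to ${unadjusted_risk * wtp_per_micromort:.2f}")  # $250.00
print(f"adjusted:   spend up to ${adjusted_risk * wtp_per_micromort:.2f}")    # $0.25
```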
I think humility is useful in cases like the present survey question, where a specific low probability, derived from an imperfect model, can change the entire conclusion. There are many computations whose outcome is fairly robust to small absolute estimation errors (e.g., intervention (1) in the question). On the other hand, when a computation’s outcome is highly sensitive to a low probability, we should be extra careful about that probability.
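A toy comparison of the two regimes (all numbers made up) shows the asymmetry:

```python
# Toy illustration of robust vs. fragile expected-value computations.
def expected_value(p, payoff):
    return p * payoff

# Intervention A: moderate probability, moderate payoff.
# A small absolute error (plus or minus 0.5 percentage points)
# barely moves its expected value.
for p in (0.095, 0.100, 0.105):
    print(f"A: p={p:.3f} -> EV = {expected_value(p, 1_000):,.2f}")

# Intervention B: tiny probability multiplied by a huge payoff.
# A plausible factor-of-10 error in p swings the EV across two
# orders of magnitude, which can flip the entire conclusion.
for p in (1e-7, 1e-6, 1e-5):
    print(f"B: p={p:.0e} -> EV = {expected_value(p, 1e9):,.2f}")
```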
Richard Chappell writes something similar here, better than I could. Thanks Lizka for linking to that post!
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They’re more or less made up, and could easily be “off” by many, many orders of magnitude. Per Holden Karnofsky’s argument in ‘Why we can’t take explicit expected value estimates literally’, Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
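To make that concrete, here is a minimal sketch of the kind of Bayesian adjustment Karnofsky describes: normal-normal shrinkage in log space, with made-up prior and noise variances (my toy numbers, not Karnofsky’s):

```python
def bayesian_adjusted_log_ev(estimate_log_ev, prior_mean=0.0,
                             prior_var=1.0, estimate_var=9.0):
    """Normal-normal shrinkage in log space; a sketch of the kind of
    adjustment Karnofsky describes, with made-up variances.

    The posterior mean is a precision-weighted average of the prior
    and the explicit estimate; a noisy (high-variance) estimate gets
    pulled strongly back toward the prior."""
    w = prior_var / (prior_var + estimate_var)   # weight on the estimate
    return prior_mean + w * (estimate_log_ev - prior_mean)

# An estimate claiming 6 orders of magnitude more impact than the prior,
# but with little epistemic support (large estimate_var), keeps only a
# fraction of its claimed edge after adjustment.
claimed = 6.0                                    # log10 units above the prior
adjusted = bayesian_adjusted_log_ev(claimed)
print(f"claimed: 10^{claimed:.1f}x, adjusted: 10^{adjusted:.1f}x")  # 10^0.6x
```

Note how the discount grows with the claimed impact: the larger and less supported the estimate, the harder it gets pulled back toward the prior, which is exactly the “massively discounting” behavior quoted above.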
Maybe I should have titled this post differently, for example “Beware of non-robust probability estimates multiplied by large numbers”.