To answer your linguistic objection directly, I think one reason/intuition I have for not trusting probabilities much above 99% or much below 1% is that the empirical failure rate for the reference class of “fairly decent forecaster considers a novel, well-defined question for some time, and then becomes inside-view utterly confident in the result” is likely between 0.1% and 5%.
For me personally, I think the rate is slightly under 1%, including failures from misreading a question (e.g. forgetting a “not”) and misunderstanding the data source.
This isn’t decisive (I do indeed give probabilities like <0.1% for direct human extinction from nuclear war or climate change this century), but it is a weak outside-view argument for why anchoring on 1%-99% is not entirely absurd, even if we lived in an epistemic environment where basis points or one-in-a-million probabilities were the default expressions of uncertainty.
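To spell out the arithmetic behind that anchoring (a rough sketch in my own notation, not a formal model): let ε be the chance that something outside my inside-view reasoning went wrong (misread question, misunderstood data source, etc.) and p my inside-view probability. If I assume the answer is roughly a coin flip in those failure cases, my all-things-considered probability is about

$$p_{\text{adjusted}} \approx (1-\varepsilon)\,p + \tfrac{1}{2}\varepsilon,$$

so with ε ≈ 1%, even inside-view certainty (p = 1) only supports ~99.5%; and if the failure cases tend to flip the answer outright (as forgetting a “not” would), the bound drops to roughly 1 − ε ≈ 99%.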
Put another way: if the best research to date on how humans assign probabilities to novel, well-defined problems is Expert Political Judgment, where political experts’ “utter confidence” translates to a ~15% failure rate (and my personal anecdotal evidence lines up with the empirical results), then I’d say something similar about 10%-90% being the range of “reasonable” probabilities, even if we used percentage-point-based language.