You or other readers might find this post of mine from last year of interest: Potential downsides of using explicit probabilities.

The potential downsides I cover include causing overconfidence, underestimating the value of information, and anchoring, among other things that are less directly related to your point. That said, I ultimately conclude that:
There are some real downsides that can occur in practice when actual humans use [explicit probabilities] (or [explicit probabilistic models], or maximising expected utility)
But some downsides that have been suggested (particularly causing overconfidence and understating the [value of information]) might actually be more pronounced for approaches other than using [explicit probabilities]
Some downsides (particularly relating to the optimizer’s curse, anchoring, and reputational issues) may be more pronounced when the probabilities one has (or could have) are less trustworthy
Other downsides (particularly excluding one’s intuitive knowledge) may be more pronounced when the probabilities one has (or could have) are more trustworthy
Only one downside (reputational issues) seems to provide any argument for even acting as if there’s a binary risk-uncertainty distinction
And even in that case the argument is quite unclear, and wouldn’t suggest we should use the idea of such a distinction in our own thinking
The above point, combined with arguments I made in an earlier post, makes me believe that we should abandon the concept of the risk-uncertainty distinction in our own thinking (and in at least most communication), and that we should think instead in terms of:
a continuum of more to less trustworthy probabilities
the practical upsides and downsides of using [explicit probabilities], for actual humans. [emphasis added]
Relatedly, I think it’s not at all obvious that putting numbers on things, forecasting, etc. would tend to get in the way of “Fostering an environment of criticism and error-correction becomes paramount”. (It definitely could get in the way sometimes; it depends on the details.) There are various reasons why putting numbers on things and making forecasts can be actively helpful in fostering such an environment (some of which I discuss in my post).