Thanks for this interesting post. I typically (though tentatively) support making and using explicit probability estimates (I discussed this a bit here). The arguments in this post have made me a little more confident in that view, and in the view that these estimates should be stated quite precisely. This is especially because this post highlighted a good way to state estimates precisely while hopefully reducing appearances of false precision.
That said, it still does seem plausible to me that anchoring effects and overestimations of the speaker's confidence (or arrogance) would be exacerbated by following the principles you describe, compared to following similar principles but with more rounding. E.g., by saying something like "I think there's a 12% chance of a famine in South Sudan this year, but if I spent another 5 hours on this I'd expect to move by 6%", rather than something like "I think there's a roughly 10% chance of a famine in South Sudan this year, but if I spent another few hours on this I'd expect to move by about 5%".
(Of course, I don't have any actual evidence about whether and how much anchoring and overestimates of speaker confidence are exacerbated by stating estimates more precisely even if you also give a statement about how (un)resilient your estimate is.)
Relatedly, it seems like one could reasonably argue against giving misleadingly precise estimates of how much one might update one's views (e.g., "I'd expect this to move by 6%"). That too could perhaps be perceived as suggesting overconfidence in one's forecasting abilities.
I expect these issues to be especially pronounced during communication with non-EAs and in low-fidelity channels.
So I'd be interested in whether you think:
The above issues are real, but don't outweigh the benefits of enhanced precision
The above issues are real, and so you advocate giving quite precise estimates only for relatively important estimates and when talking to the right sort of person in the right sort of context (e.g., conversation rather than media soundbite)
The above issues are trivial in size
(Or something else)
I had a worry on similar lines that I was surprised not to see discussed.
I think the obvious objection to using additional precision is that this will falsely convey certainty and expertise to most folks (i.e. those outside the EA/rationalist bubble). If I say to a man in the pub either (A) "there's a 12.4% chance of famine in Sudan" or (B) "there's a 10% chance of famine in Sudan", I expect him to interpret me as an expert in (A) - how else could I get so precise? - even if I know nothing about Sudan and all I've read about discussing probabilities is this forum post. I might expect him to take my estimate more seriously than that of someone who knows about Sudan but not about conveying uncertainty.
(In philosophy of language jargon, using a non-rounded percentage carries a conversational implicature that you have enough information, by the standards of ordinary discourse, to be that precise.)
Personally, I think that the post did discuss that objection. In particular, the "False precision" section seems to capture it, and the "Resilience" section suggests Greg thinks his proposal addresses it. That is, Greg isn't suggesting saying (A), but rather saying something like (A+) "I think there's a 12% chance of a famine in South Sudan this year, but if I spent another 5 hours on this I'd expect to move by 6%".
What I was wondering was what his thoughts were on the possibility of substantial anchoring and false perceptions of certainty even if you adjust (A) to (A+). And whether that means it'd often be best to indeed make the adjustment of mentioning resilience, but to still "round off" one's estimate even so.
Hmm. Okay, that's fair; on re-reading I note the OP did discuss this at the start, but I'm still unconvinced. I think the context may make a difference. If you are speaking to a member of the public, I think my concern stands, because of how they will misinterpret the thoughtfulness of your prediction. If you are speaking to other predict-y types, I think this concern disappears, as they will interpret your statements the way you mean them. And if you're putting a set of predictions together into a calculation, not only is it useful to carry that precision through, but it's not as if your calculation will misinterpret you, so to speak.
My reply is a mix of the considerations you anticipate. With apologies for brevity:
It's not clear to me whether avoiding anchoring favours (e.g.) round numbers or not. If my listener, in virtue of being human, is going to anchor on whatever number I provide them, I might as well anchor them on a number I believe to be more accurate.
I expect there are better forms of words for my examples that avoid the downsides you note (e.g. maybe saying "roughly 12%" instead of "12%" still helps, even if you give a later articulation).
I'm less fussed about precision re: resilience (e.g. "I'd typically expect drift of several percent from this with a few more hours to think about it" doesn't seem much worse than "the standard error of this forecast is 6% versus me with 5 hours more thinking time" or similar). I'd still insist something at least pseudo-quantitative is important, as verbal riders may not put the listener in the right ballpark (e.g. does "roughly" 10% pretty much rule out it being 30%?). There's a toy sketch of what I mean by "pseudo-quantitative" after these points.
Similar to the "trip to the shops" example in the OP, there are plenty of cases where precision isn't a good way to spend time and words (e.g. I could have counter-productively littered many of the sentences above with precise yet non-resilient forecasts). I'd guess there are also cases where it is better to sacrifice precision to better communicate with your listener (e.g. despite the rider on resilience you offer, they will still think "12%" is claimed to be accurate to the nearest percent, but if you say "roughly 10%" they will better approximate what you have in mind). I still think when the stakes are sufficiently high, it is worth taking pains on this.
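The promised toy sketch of a "pseudo-quantitative" resilience statement (every number in it is invented purely for illustration, not a real analysis of the famine case): imagine a few places the estimate could plausibly land after more research, weight them, and summarise the spread as an expected drift.

```python
# Toy illustration of a "pseudo-quantitative" resilience statement.
# All figures are invented for the example, not real estimates about South Sudan.
import numpy as np

current_estimate = 0.12

# Hypothetical guesses at where my estimate might land after ~5 more hours of work,
# with rough weights on each scenario (weights sum to 1).
future_estimates = np.array([0.05, 0.10, 0.15, 0.20, 0.30])
weights = np.array([0.15, 0.30, 0.30, 0.15, 0.10])

# Expected absolute drift, and a "standard error" versus my better-informed self.
expected_drift = np.sum(weights * np.abs(future_estimates - current_estimate))
std_error = np.sqrt(np.sum(weights * (future_estimates - current_estimate) ** 2))

print(f"Current estimate: {current_estimate:.0%}")
print(f"Expected drift with more thinking time: ~{expected_drift:.0%}")
print(f"'Standard error' vs future self: ~{std_error:.0%}")
```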
That all makes sense to me. Thanks for the answer!
And interesting point regarding the way anchoring may also boost the value of precision; I hadn't considered that previously.
Also, it occurs to me that giving percentages is itself effectively rounding to the nearest percent; it's unlikely the cognitive processes that result in outputting an estimate naturally fall into 100 evenly spaced buckets. Do you think we should typically give percentages? Or that we should round to the nearest thousandth, hundredth, tenth, etc. similarly often, just depending on a range of factors about the situation?
(I mean this more as a genuine question than an attempted reductio ad absurdum.)
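(To get a rough feel for the pure accuracy side of this, here's the kind of toy simulation I have in mind - the Beta distribution of underlying probabilities and the perfectly calibrated forecaster are both just assumptions - comparing the Brier score a forecaster would get when their estimates are rounded to different granularities before being reported. It obviously says nothing about the anchoring and perceived-confidence effects discussed above.)

```python
# Toy simulation: accuracy cost (Brier score) of rounding forecasts to various
# granularities. The Beta(2, 5) "true" probabilities and the assumption that the
# forecaster knows them exactly are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

true_p = rng.beta(2, 5, size=n)       # underlying probability of each event
outcomes = rng.random(n) < true_p     # which events actually occur

def brier(forecasts):
    """Mean squared error between forecasts and 0/1 outcomes (lower is better)."""
    return float(np.mean((forecasts - outcomes) ** 2))

for label, step in [("nearest 10%", 0.10), ("nearest 5%", 0.05),
                    ("nearest 1%", 0.01), ("unrounded", None)]:
    reported = true_p if step is None else np.round(true_p / step) * step
    print(f"{label:>12}: Brier score = {brier(reported):.4f}")
```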