Reply 1/3
Got it now, thanks! I agree there’s a distinction between confident and uncertain, and it’s an important point.
I’ll spend this reply on the distinction between the two, another response on the interventions you propose, and another response on your statement that qualifiers often help you be more relaxed.
The more I think about it, the more I think there’s quite a bit for someone to unpack here conceptually. I haven’t done so, but here’s a start:
1. There’s stating your degree of epistemic uncertainty to inform others how much they should update based on your belief (e.g. “I’m 70% confident in my beliefs, i.e. I think it’s 70% likely I’d still hold them after lots of reflection.”)
2. There’s stating probabilities, which looks similar but just tells others what your belief is, not how confident you are in it (e.g. “I think event X is 70% likely to occur”).
3. There’s stating epistemic uncertainty for social reasons that are not anxiety/underconfidence driven: making a situation less adversarial; showing that you’re willing to change your mind; making it easy for others to disagree; or just picking up this style of talking from people around you.
4. There’s stating epistemic uncertainty for social reasons that are anxiety/underconfidence driven: showing you’re willing to change your mind so others don’t think you’re cocky; saying you’re not sure so you don’t look silly if you’re wrong, or because of any other worry that you might be saying something ‘dumb’; making a situation less adversarial because you want to avoid conflict, because you don’t want others to dislike you.
5. There’s stating uncertainty about the value of your contribution. That can honestly be done in full confidence: you want to help the group allocate attention optimally, so you convey information and social permission not to spend too much time on your point. I think online most of the reasons to do so don’t apply (people can just ignore you), so I’m counting it mostly as anxious social signalling or, in the best case, a not-so-useful habit. An exception is if you want to help people decide whether to read a long piece of text.
I think you’re mostly referring to 1 and 2. I think 1 and 2 are good things to encourage and 4 and 5 are bad things to encourage, although I think 4/5 also have their functions and shouldn’t be fully discouraged (more in my [third reply](https://forum.effectivealtruism.org/posts/rWSLCMyvSbN5K5kqy/chi-s-shortform?commentId=un24bc2ZcH4mrGS8f)). I think 3 is a mix. I like 3. I really like that EA has so much of 3. But too much can be unhelpful, esp. the “this is just a habit” kind of 3.
I think 1 and 2 look quite different from 4 and 5. The main problem is that it’s hard to tell whether something is 3 or 4 or both, and that often you can only know if you know the intention behind a sentence. Although 1 can also sometimes be hard to tell apart from 3, 4, and 5, e.g. today I said “I could be wrong”, which triggered my 4-alarm, but I was actually doing 1. (This is alongside other norms, e.g. expert deference memes, that might encourage 4.)
I would love to see more expressions that are obviously 1, and fewer that could be construed as any of 1, 3, 4, or 5. Otherwise, the main way I see to improve this communication norm is for people to individually ask themselves which of 1, 3, 4, or 5 is their intention behind a qualifier.
Edit: No idea, I really love 3.
I like your 1–5 list. Tangentially, I just want to push back a bit on 1 and 2 being obviously good. While I think quantification is in general good, my forecasting experience has taught me that quantitative estimates without a robust track record and/or reasoning are quite unsatisfactory. I am a bit worried that misunderstanding of Aumann’s agreement theorem might lead to overpraising the communication of pure probabilities (which are often unhelpful).