Observation about EA culture and my journey to develop self-confidence:
Today I noticed an eerie similarity between the things I’m trying to work on to become more confident and effective altruism culture. For example, I am trying to reduce my excessive use of qualifiers. At the same time, qualifiers are very popular in effective altruism. It was very enlightening when a book asked me to guess whether the following piece of dialogue came from a man or a woman:
‘I just had a thought, I don’t know if it’s worth mentioning...I just had a thought about [X] on this one, and I know it might not be the right time to pop it on the table, but I just thought I’d mention it in case it’s useful.’
and I just immediately thought ‘No, that’s an effective altruist’. I think what the community actually endorses is communicating your degree of epistemic certainty and making it easy to disagree, while the above quote is anxious social signalling. I do think the community does a lot of the latter though, and it’s partly rewarded because it’s confounded with the former. (In the above example it’s obvious, but I think anxious social signalling is also often where ‘I’m uncertain about this’, ‘I haven’t thought much about this’, and ‘I might be wrong’ (of course you might be wrong) come from. That’s certainly the case for me.) Tangentially, there is also a strong emphasis on deference and a somewhat conservative approach to not causing harm, esp. with new projects.
Overall, I am worried that this communication norm and the two memes I mentioned foster underconfidence, a tendency to keep yourself small, and the feeling that you need permission to work on important problems or to think through important questions. These norms and memes also have upsides, esp. when targeted at overconfident people, and I haven’t figured out yet what my overall take on them is. I just thought it was an interesting observation that certain things I’m trying to decrease are particularly pervasive in the effective altruism community.
(I think there are also lots of other problems related to self-esteem and effective altruism, but I wanted to focus on this particular aspect.)
Hey Chi, let me report my personal experience: uncertainty and using qualifiers feel quite different to me from anxious social signaling. The conversation at the beginning of “Confidence all the way up” points to the difference. You can be uncertain, or potentially wrong, and be chill about it. Acknowledging uncertainty helps with (the fear of) saying “oops, was wrong” and hence makes one more at ease.
Hey Misha!
Thanks for the reply and for linking the post; I enjoyed reading the conversation. I agree that there’s an important difference. The point I was trying to make is that one can look like the other, and that I’m worried a culture of epistemic uncertainty can accidentally foster a culture of anxious social signalling, esp. when people who are inclined to be underconfident can smuggle in anxious social signalling disguised (to the speaker/writer themselves) as epistemic uncertainty. And because anxious social signalling can superficially look similar to epistemic uncertainty, they see other people in their community show similar-ish behavior and see similar-ish behavior be rewarded.
Not sure how to address this without harming communication of epistemic uncertainty, though. (Although I’m inclined to think the right trade-off point involves accepting more risk of losing some of the good communication of epistemic uncertainty.)
Or was your point that you disagree that they look superficially similar? And hence, one wouldn’t encourage the other? And if that’s indeed your point, would you independently agree or disagree that there’s a lot of anxious social signaling of uncertainty in effective altruism?
I mostly wanted to highlight that there is a confident but uncertain mode of communication. And that displaying uncertainty or lack of knowledge sometimes helps me be more relaxed.
People surely pick up bits of style from others they respect, so aspiring EAs are likely to adopt the manners of respected members of our community. It seems plausible to me that this will lead to the negative consequences you mentioned in the fifth paragraph (e.g. there is too much deference to authority for the amount of cluelessness and uncertainty we have). I think a solution might lie not in discouraging displays of uncertainty but in encouraging positive downstream activities like betting, quantification, acknowledging that arguments changed your mind, &c — likely this will make cargo-culting less probable (a tangential example is encouraging people to make predictions when they say “my model is…”).
I agree underconfidence and anxiety could be confused on the forum. But not in real life, as people leak clues about their inner state all the time.
Reply 1/3
Got it now, thanks! I agree there’s a confident-but-uncertain mode of communication, and it’s an important point.
I’ll spend this reply on the distinction between the two, another response on the interventions you propose, and another response on your statement that qualifiers often help you be more relaxed.
The more I think about it, the more I think there’s quite a bit to unpack here conceptually. I haven’t done so, but here’s a start:
1. There’s stating your degree of epistemic uncertainty to inform others how much they should update based on your belief (e.g. “I’m 70% confident in my beliefs, i.e. I think it’s 70% likely I’d still hold them after lots of reflection.”)
2. There’s stating probabilities, which looks similar but just tells others what your belief is, not how confident you are in it (“I think event X is 70% likely to occur.”)
3. There’s stating epistemic uncertainty for social reasons that are not anxiety/underconfidence-driven: making a situation less adversarial; showing that you’re willing to change your mind; making it easy for others to disagree; or just picking up this style of talking from people around you.
4. There’s stating epistemic uncertainty for social reasons that are anxiety/underconfidence-driven: showing you’re willing to change your mind so others don’t think you’re cocky; saying you’re not sure so you don’t look silly if you’re wrong (or any other worry you have because you think maybe you’re saying something ‘dumb’); making a situation less adversarial because you want to avoid conflict because you don’t want others to dislike you.
5. There’s stating uncertainty about the value of your contribution. That can honestly be done in full confidence, because you want to help the group allocate attention optimally, so you convey information and social permission not to spend too much time on your point. I think online most of the reasons to do so do not apply (people can just ignore you), so I’m counting it mostly as anxious social signalling or, in the best case, a not-so-useful habit. An exception is if you want to help people decide whether to read a long piece of text.
I think you’re mostly referring to 1 and 2. I think 1 and 2 are good things to encourage, and 4 and 5 are bad things to encourage. Although I think 4/5 also have their functions and shouldn’t be fully discouraged (more in my [third reply](https://forum.effectivealtruism.org/posts/rWSLCMyvSbN5K5kqy/chi-s-shortform?commentId=un24bc2ZcH4mrGS8f)). I think 3 is a mix. I like 3. I really like that EA has so much of 3. But too much can be unhelpful, esp. the “this is just a habit” kind of 3.
I think 1 and 2 look quite different from 4 and 5. The main problem is that it’s hard to see whether something is 3 or 4 or both, and that often you can only know if you know the intention behind a sentence. Although 1 can also sometimes be hard to tell apart from 3, 4, and 5, e.g. today I said “I could be wrong”, which triggered my 4-alarm, but I was actually doing 1. (This is alongside other norms, e.g. expert-deference memes, that might encourage 4.)
I would love to see more expressions that are obviously 1, and less of what could be construed as any of 1, 3, 4, or 5. Otherwise, the main way I see to improve this communication norm is for people to individually ask themselves which of 1, 3, 4, or 5 is the intention behind a qualifier.
edit: No idea, I really love 3
I like your 1–5 list. Tangentially, I just want to push back a bit on 1 and 2 being obviously good. While I think that quantification is in general good, my forecasting experience taught me that quantitative estimates without a robust track record and/or reasoning are quite unsatisfactory. I am a bit worried that misunderstanding of Aumann’s agreement theorem might lead to overpraising communication of pure probabilities (which are often unhelpful).
Reply 3/3
“displaying uncertainty or lack of knowledge sometimes helps me be more relaxed”
I think there’s a good version of that experience and I think that’s what you’re referring to, and I agree that’s a good use of qualifiers. Just wanted to make a note to potential readers because I think the literal reading of that statement is a bit incomplete. So, this is not really addressed at you :)
I think displaying uncertainty or lack of knowledge always helps you be more relaxed, even when it comes from a place of anxious social signalling. (See my first reply for what exactly I mean by that and what I contrast it with.) That’s why people do it. If you usually anxiously qualify and force yourself not to, that feels scary. I still think practicing not to do it will help with self-confidence, as in taking yourself more seriously, in the long run. (Apart from more efficient communication.)*
Of course, sometimes you just need to qualify things (in the anxious-social-signalling sense) to get yourself in the right state of mind (e.g. to feel safe to openly change your mind later, freely speculate, or say anything at all in the first place), or allowing yourself the habit of anxious social signalling makes things so much more efficient that you should absolutely go for it and not beat yourself up over it. Actually, an almost-ideal healthy confidence probably includes some degree of what I call anxious social signalling, and it’s unrealistic to get rid of all of it.
I just found one other frame for what I meant with anxious social signalling partly being rewarded in EA. Usually, that kind of signaling means others take you less seriously. I think it’s great that that’s not so much the case in EA, but I worry that sometimes it may look like people in EA take you more seriously when you do it. Maybe because EA actually endorses what I call 3 in my first reply, but—to say the same thing for the 100th time—I worry that it also encourages anxious social signalling.
Chi, I appreciate the depth of your engagement! I mostly agree with your comments.
Reply 2/3
I like the suggestions, and they probably-not-so-incidentally are also things that I often tell myself I should do more and that I hate. One drawback is that they are already quite difficult, so I’m worried that it’s too ambitious an ask for many. At least for an individual, it might be more tractable to (encourage them to) change their excessive use of qualifiers as a first baby step than to jump right into quantification and betting. (Of course, what people find more or less difficult confidence-wise differs. But these things are definitely quite high on my personal “how scary are things” ranking, and I would expect that that’s the case for most people.)
OTOH, on the community level, the approach of encouraging more quantification etc. might well be more tractable. Community-wide communication norms are very fuzzy and seem hard to influence on the whole. (I noticed that I didn’t draw the distinction quite where you drew it. E.g. “acknowledging that arguments changed your mind” is also about communication norms.)
I am a little bit worried that it might have backfire effects. More quantification and betting could mostly encourage already confident people to do so (while underconfident people are still stuck at “wouldn’t even dare to write a forum comment because that’s scary”), make the online community seem more confident, and make entry for underconfident people harder, i.e. scarier. Overall, I think the reasons to encourage a culture of betting, quantification etc. are stronger than the concerns about backfiring. But I’m not sure if that’s the case for other norms that could have that effect. (See also my reply to Emery.)
I agree that the mechanisms proposed in my comment are quite costly sometimes. But I think higher-effort downstream activities only need to be invoked occasionally (e.g. not everyone who downvotes needs to explain why but it’s good that someone will occasionally) — if they are invoked consistently they will be picked up by people.
Right, I think I see how this can backfire now. Maybe upvoting “ugh, I still think that this is likely but am uncomfortable about betting” might still encourage using qualifiers for reasons 1–3 while acknowledging vulnerability and reducing pressure on commenters?
This is a really interesting point! I think I’m also sometimes guilty of using the norms of signalling epistemic uncertainty in order to mask what is actually anxious social signalling on my part, which I hadn’t thought about so explicitly until now.
One thing that occurred to me while reading this—I’d be curious as to whether you have any thoughts on how this might interact with gender diversity in EA, if at all?
Thanks for the reply!
Honestly, I’m confused by the relation to gender. (I’m bracketing out genders that are neither purely female nor purely male because I don’t know enough about the patterns of qualifiers there.)
- In general, I think anxious qualifying is more common for women. EA isn’t known for having very many women, so I’m a bit confused about why there’s seemingly so much of it in EA.
- (As an aside: this reminds me of a topic I didn’t bring into the original post: how much is just a selection effect, and how much is EA increasing anxious qualifying? Intuitively, I at least think it’s not purely a selection effect, but I haven’t thought closely about this.)
- Given the above, I would expect that women are also more likely to take the EA culture and transform it into excessive use of anxious qualifiers, but that’s just speculation. Maybe the percentage change in anxious qualifier use is also higher for men, just because their baseline is lower.
- I’m not sure how this affects gender diversity in EA as a whole. I can imagine that it might actually be good, because underconfident people might be less scared off if the online communication doesn’t seem too confident, and they feel like they can safely use their preferred lots-of-anxious-signalling communication strategy.
- That being said, I guess what would do the above job (at least) equally well is what I call “3” in my reply to Misha. Or, at least, I’m hopeful that there are some other communication strategies that would have that benefit without encouraging anxious signalling.
edit: I noticed that the last bullet point doesn’t make much sense because I claim elsewhere that 3 can encourage 4 because they look so similar, and I stand by that.
Interestingly, maybe not instructively, I was kind of hesitant to bring gender into my original post. Partly for good reasons, but partly also because I worried about backlash or at least that some people would take it less seriously as a result. I honestly don’t know if that says much about EA/society, or solely about me. (I felt the need to include “honestly” to make it distinguishable from a random qualifier and mark it as a genuine expression of cluelessness!)
I think within EA, people should report their accurate levels of confidence, which in some cultures and situations will come across as underconfident and in other cultures and situations will come across as overconfident.
I’m not sure what the practical solution is to this level of precision bleeding outside of EA; I definitely felt like there were times when I was socially penalized for trying to be accurate in situations where accuracy was implicitly not called for. If I were smarter/more socially savvy, the “obvious” right call would be to quickly code-switch between different contexts, but in practice I’ve found it quite hard.
___
Separate from the semantics used, I agree there is a real issue where some people are systematically underconfident or overconfident relative to reality, and this hurts their ability to believe true things or achieve their goals in the long run. Unfortunately, this plausibly correlates with demographic differences (e.g. women on average less confident than men, Asians on average less confident than Caucasians), which seems worth correcting for if possible.
Why is this comment downvoted? :)
I didn’t downvote your comment, but I did feel a bit like it wasn’t really addressing the points Chi was making, so if I had to guess, I’d say that might be why.
If my comment didn’t seem pertinent, I think I most likely misunderstood the original points then. Will reread and try to understand better.
Should we interview people with high status in the effective altruism community (or make other content) featuring their (personal) story, how they have overcome challenges, and live into their values?
Background:
I think it’s no secret that effective altruism has some problems with community health. (This is not to belittle the great work that is done in this space.) Posts that talk about personal struggles, for example related to self-esteem and impact, usually get highly upvoted. While many people agree that we should reward dedication and that the thing that really matters is to try your best given your resources, I think that, within EA, the main thing that gives you status, the thing many people admire, desire, and tie their self-esteem to, is being smart.
Other altruistic communities seem to do a better job at making people feel included. I think this has already been discussed a lot, and there seem to be some reasons for why this is just inherently harder for effective altruism to do. But one specific thing I noticed is what I associate with leaders of different altruistic communities.
When I think of most high-status people in effective altruism, I don’t think of their altruistic (or other personal) virtues; I think ‘Wow, they’re smart.’ Not because of a lack of altruistic virtues (I assume), but because smartness is just more salient to me. On the other hand, when I think of other people, for example Michelle Obama or Melinda Gates or even Alicia Keys for that matter, I do think “Wow, these people are so badass. They really live into their values.” I wouldn’t want to use them as role models for how to have impact, but I do use them as role models for what kind of person I would like to be. I admire them as people, they inspire me to work on myself to become like them in relevant respects, and they make me think it’s possible. I am worried that people look to high-status people in effective altruism for what kind of person they would like to be, but the main trait of those people they are presented with is smartness, which is mostly intractable to improve.
I don’t think this difference is because these non-EAs lack any smartness or achievement that I could admire. I think it’s because I have consumed content where their personal story and values were put front and centre alongside what they did and how they achieved it. Similarly, I don’t think that high status people in effective altruism lack any personal virtue I could aspire to, but I’m simply not exposed to it.
I don’t know if it would actually improve this aspect of community health, or whether it’s overall worth the time of all people involved (although I think the answer to the second is yes if the answer to the first is yes), but this made me wonder if we should create more content with high-status people in the effective altruism community that is similar to the kind of interviews with non-EAs I mentioned. ‘That kind of content’ is pretty vague, and one would have to figure out how we can best celebrate the kind of virtues we want to celebrate, and whether this could work, in principle, with effective altruism. (Maybe the personal virtues we most admire in high-status effective altruists are just detrimental to the self-esteem of others. I can imagine that with some presentations of impact obsession, for example.) But this might be a worthwhile idea, and I am somewhat hopeful that it could be combined with the presentation of more object-level content (the type that 80k interviews are mostly about).
I just wondered whether there is a systematic bias in how much advice there is in EA for people who tend to be underconfident versus people who tend to be appropriately confident or overconfident. Anecdotally, when I think of memes/norms in effective altruism that I feel at least conflicted about, it’s mostly because they seem to be harmful for underconfident people to hear.
Way in which this could be true and bad: people tend to post advice that would be helpful to themselves, and underconfident people tend to not post advice/things in general.
Way in which this could be true but unclear in sign: people tend to post advice that would be helpful to themselves, and there are more appropriately confident or overconfident people in the community than underconfident ones.
Way in which this could be true but appropriate: advice that would be harmful when overconfident people internalize it tends to be more harmful than advice that’s harmful to underconfident people. Hence, people post proportionally less of the first.
(I don’t think the vast space of possible advice just has more advice that’s harmful for underconfident people to hear than advice that’s harmful for overconfident people to hear.)
Maybe memes/norms that could be harmful for underconfident people to hear, or the properties that make them harmful, are also just more salient to me.