These are the risks, actual or perceived, that I most often hear about when people choose not to publicly own their EA identity:
People don’t want to talk to you / take you seriously because you are affiliated with EA
You won’t get some career opportunities because you are affiliated with EA
I think you left out another key reason: you do not agree with lots of EAs about lots of things and think that telling people you are an EA will give them false impressions about your beliefs.
I am a development economist, and I often tell other development economists that I am an EA. That tells them “this is someone who cares a lot about cost-effectiveness and finding impactful interventions, not just answering questions for the sake of answering questions”, which is true. But if I said I was an EA to random people in the Bay Area, they would infer “this is someone who thinks AI risk is a big deal”, which is not true of me, so I don’t want to convey that. This example could apply to lots of people who work on global development or animal welfare and don’t feel compelled by AI risk. (ETA: one solution would be to signal the flavor of EA you’re most involved in, e.g. “bed nets not light cone”, but it sounds like that would not be owning your EA identity publicly according to the OP.)
I also don’t follow much of the media around EA. I don’t have any special insight into the OpenAI board drama, or FTX, or anything. I don’t see myself as a resource people should come to when they want to make sense of EA being in the news for whatever reason. I think this behavior is specific to me, so maybe it’s not a great reason for others.
“one solution would be to signal the flavor of EA you’re most involved in, e.g. ‘bed nets not light cone’, but it sounds like that would not be owning your EA identity publicly according to the OP”
I’d also add that I took great care not to use “identity” but “affiliation” instead; I think it’s important not to make it about identity.
I’m not sure I correctly understand your situation (I have not lived in the Bay Area for more than a few weeks), but I think it can be worth doing the following:
State your affinity for EA, maybe even with some explanation
Let people get the wrong impression of what it means to you anyway
[Highly contextual] Correct this impression, either through immediate explanation or by letting your actions speak
-> Over time, this can help everyone see the core value of what you cherish and reduce the all-too-common understanding of EA as an identity (both within and outside of EA). We all need to work on not identifying with our convictions to avoid being soldiers, in the “soldier mindset” sense.
Most interactions are iterative, not one-offs. You could help people understand that EA is not synonymous with AI x-risk.
If you think EA is about a general approach to doing good, explaining this more often would help both you and the x-risk people. Identities are often pushed onto us and they distort discourse; I see it as part of my responsibility to counteract this wherever I can. Otherwise, my affiliation would mostly be a way to identify myself as “in-group”, which reinforces the psycho-social dynamics that build the “out-group” and thus identity politics.
Your example seems like an opportunity to help people better understand EA, or even to improve EA with the feedback you get. You don’t necessarily have to stay on top of the news; on the contrary, it helps if you show that everyone can make EA their own thing as long as the basic tenets are preserved.
I understand this might be effortful for many. I don’t want to pressure anyone into doing this because it can also look defensive and reinforce identity politics. I figured it might be worth laying out this model to make it easier for people to sit through the discomfort and counteract an oversimplified understanding of what they cherish—whether that’s EA or anything else.
I think this is a good approach for iterative interactions.
I also want to flag that some interactions might be one-offs precisely because people evaluate you based on their (false) impression of you. Job interviews, cover letters, and first dates are obvious examples. But even casually meeting a new person for the first time without a structure of assessment or evaluation (imagine a friend’s birthday party, a conference, or a Meetup-type social event) involves that person deciding whether they would want to spend time with you. So if I tell a sibling that I am interested in EA ideas, we will almost certainly have the space and time to discuss the details of what that means. But new and unformed social relationships often won’t be given that space and time to understand nuance.
This is interesting, because I can tell you that, as a retired military servicemember, I’ve encountered some of the same discrimination and labeling within the EA community that EAs claim to experience from non-EAs. To use Alix Pham’s verbiage from an earlier comment in a separate context: “people don’t talk to you” (because they project some beliefs on you that you actually don’t have). Thankfully, engaging with the EA community over the last several years has changed minds, at least among those with whom I’ve engaged (e.g., that I am not here to infiltrate the EA community or to arm it). In my Defense community, it is generally inadvisable to claim the EA moniker, while ironically it seemed inadvisable in my EA community to claim being a former servicemember. (It was a challenge being PNG’ed* in both communities that one wants to represent and bridge together, but I digress.)
Additionally, I believe that the term “soldier bias” creates, within the EA community, the very confirmation bias that EAs generally try to avoid, by automatically claiming that all soldiers have this particularly zealous bias. See the irony there? I know there are several former servicemembers within the EA community who are proud and outstanding EAs (though many of them have told me they are hesitant to openly divulge their previous profession). I think the term “soldier bias” would be unacceptable as a professional and formal naming convention if you replaced “soldier” with the name of any other profession and meant it in a negative context.
*Persona non grata
Thank you, John, for sharing! This is super interesting.
The “PNG” part in particular makes me reflect on community belonging and inclusivity; I think it’s an important aspect.
That’s interesting, I’ll reflect on it. I would be curious to explore: how could the reason you mention be a risk for you? And to what extent would you take action to make sure people don’t know about your EA affiliation for that reason?
“one solution would be to signal the flavor of EA you’re most involved in, e.g. ‘bed nets not light cone’, but it sounds like that would not be owning your EA identity publicly according to the OP”
No, I do think it’s owning to some extent already.
I think the reason you mention is also partly included in “people don’t talk to you” (because they project some beliefs on you that you actually don’t have). But my certainty on that line of thought is lower, as it’s something I’ve thought less about.
It’s not a risk to say I’m an EA; it’s just not informative in many contexts. It conveys a specific and false impression of my priorities and what I work on, so as a matter of cooperative communication, I don’t do it. But I don’t take any actions to hide it.
I don’t really think the risk is that people won’t talk to me because they project false beliefs onto me; I’m not worried about negative consequences for myself if people think I work on AI. It’s just not a helpful thing to say in most contexts, because I don’t actually work on AI.
I think it’s different for professional community builders. In your job, EA is a community to be represented. In my job, EA is a signifier for a cluster of beliefs. Sometimes that cluster is relevant and sometimes it isn’t.
Thanks for clarifying, that makes a lot of sense. I’m not sure yet, but I think those considerations are not in the scope of my post, then? Let me know what you think.
Maybe this part conveys it: “I’m also not necessarily saying that one needs to shout it everywhere, but simply be transparent about it.”