I’m not going to concede the ground that this conversation is about kindness or intellectual autonomy, because that’s not really what’s at stake. This is about telling certain kinds of people that EA isn’t for them.
there are only some people who have had experiences that would point them to this correct conclusion
But this is about optimal marketing and movement growth, a very objective empirical question. It doesn’t seem to have much to do with personal experiences; we don’t normally bring up intersectionality in debates about ordinary things like this. We just talk about experiences and knowledge in common terms, since race and so on aren’t dominant factors.
By the way, think of the kind of message that would be sent: “Hey you! Don’t come to effective altruism! It probably isn’t for you!” That would be interpreted as elitist and closed-minded, because there are smart people who don’t share the views of other EAs, and they ought to be involved.
Let’s be really clear. The points given in the OP, even when steelmanned, do not contradict EA. They happened to cause trouble for one person; that’s all.
I have some sort of dispreference for speech about how “we” in EA believe one thing or another.
You can interpret that kind of speech prescriptively—i.e., I am making the claim that given the premises of our shared activities and values, effective altruists should agree that reducing world poverty is overwhelmingly more important than aspiring to be the nicest, meekest social movement in the world.
Edit: also, since you stated earlier that you don’t actually identify as EA, it really doesn’t make any sense for you to complain about how we talk about what we believe.
I agree with your last paragraph, as written. But this conversation is about kindness, about trusting people to be competent altruists, and about epistemic humility. That’s because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold in a way that disproportionately drives away certain types of skilled people who’d otherwise feel welcome in EA.
But this is about optimal marketing and movement growth, a very empirical question. It doesn’t seem to have much to do with personal experiences
I’m happy to discuss optimal marketing and movement growth strategies, but I don’t think the question of how to optimally grow EA is best answered as an empirical question at all. I’m generally highly supportive of trying to quantify and optimize things, but in this case, treating movement growth as something suited to empirical analysis may be harmful on net, because the underlying factors actually responsible for the way and extent to which movement growth maps to eventual impact are impossible to meaningfully track. Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.
The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don’t understand or don’t appreciate that unhealthy conversation norms subtly but surely drive away valuable people. I’d expect the effect of unhealthy conversation norms to be mostly unnoticeable; for one, A/B-testing EA’s overall conversation norms isn’t possible. If you’re the sort of person who doesn’t use particularly friendly conversation norms in the first place, you’re likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.
“Conversation norms” might seem like a dangerously broad term, but I think it’s pointing at exactly the right thing. When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA’s conversation norms worse. There’s no reason to think that a decrease in quality of EA’s conversation norms would show up in quantitative metrics like number of new pledges per month. But when EA’s conversation norms become less healthy, key people are pushed away, or don’t engage with us in the first place, and this destroys utility we’d have otherwise produced.
It may be worse than this, even: if counterfactual EAs who care a lot about having healthy conversational norms are a somewhat homogeneous group of people with skill sets that are distinct from our own, this could cause us to disproportionately lack certain classes of talented people in EA.
That’s because acting indifferent to whether people who care about the same things we do waste time figuring things out is cold
No, it’s not cold. It’s indifferent, and normal. No one in any social movement worries about wasting the time of people who come to learn about it. Churches don’t worry that they’re wasting people’s time by inviting them in for a sermon; they don’t advertise all the reasons people don’t believe in God. Feminists don’t worry that they’re wasting people’s time by not advertising that they want white women to check their privilege before women of color. BLM doesn’t worry that it’s wasting people’s time by not advertising that it doesn’t welcome people who are primarily concerned with combating black-on-black violence. And so on.
Learning what EA is about does not take long. This is not like asking people to read Marx or the LessWrong Sequences. The books by Singer and MacAskill are very accessible and quick to read. If someone reads them and doesn’t like them, so what? They heard a different perspective before going back to their ordinary life.
is cold in a way that disproportionately drives away certain types of skilled people who’d otherwise feel welcome in EA.
Who thinks “I’m an effective altruist and I feel unwelcome here in effective altruism because people who don’t agree with effective altruism aren’t properly shielded from our movement”? If you want to make people feel welcome then make it a movement that works for them. I fail to see how publicly broadcasting incompatibility with others does any good.
Sure, it’s nice to have a clearly defined outgroup that you can contrast yourselves with, to promote solidarity. Is that what you mean? But there are much easier and safer punching bags to be used for this purpose, like selfish capitalists or snobby Marxist intellectuals.
Intersectionality comes into the picture when, due to their experiences, people from certain backgrounds are much, much likelier to be able to easily grasp how these underlying factors impact the way in which not all movement growth is equal.
Intersectionality does not mean simply looking at people’s experiences from different backgrounds. It means critiquing and moving past sweeping modernist narratives of the experiences of large groups by investigating the unique ways in which orthogonal identity categories interact. I don’t see why it’s helpful, given that identity hasn’t previously entered the picture at all in this conversation, and that there don’t seem to be any problematic sweeping identity narratives floating around.
The obvious-to-me way in which this could be true is if traditionally privileged people (especially first-worlders with testosterone-dominated bodies) either don’t understand or don’t appreciate that unhealthy conversation norms subtly but surely drive away valuable people.
I am a little confused here. You are the one saying that we should make outward-facing statements telling people that EA isn’t suited for them. How is that not going to drive away valuable people, in particular the ones with diverse perspectives?
And in what way is failing to make such statements an unhealthy conversational norm? I have never seen any social movement behave this way; if doing so is a conversational norm, it’s not one that people have come to expect.
Moreover, the street runs both ways. Here’s a different perspective which you may have overlooked due to your background: some people want to be part of a movement that’s solid and self-assured. An environment where language is constantly policed for extreme niceness can leave some people uninterested in engaging in honest dialogue.
If you’re the sort of person who doesn’t use particularly friendly conversation norms in the first place, you’re likely to underestimate how important friendly conversation norms are to the well-being of others, and overestimate the willingness of others to consider themselves a part of a movement with poor conversation norms.
You can reject quantitative metrics, and you can also give some credence to allegations of bias. But you can’t rely on this sort of thing to form a narrative. You have to find some kind of evidence.
When people speak as if dishonesty is permissible, as if kindness is optional, or as if dominating others is ok, this makes EA’s conversation norms worse.
This is a strawman of my statements, which I have no interest in validating through response.
Really liked this comment. I’d be happy to see a top-level post on the issue.
I agree that it would be better out of context, since it’s strawmanning the comment that it’s trying to respond to.