Why doesn’t EA focus on equity, human rights, and opposing discrimination (as cause areas)?
KJonEA asks:
‘How focused do you think EA is on topics of race and gender equity/justice, human rights, and anti-discrimination? What do you think are factors that shape the community’s focus?’
In response, I ended up writing a lot of words, so I thought it was worth editing them a bit and putting them in a shortform. I’ve also added some ‘counterpoints’ that weren’t in the original comment.
To lay my cards on the table: I’m a social progressive and leftist, and I think it would be cool if more EAs thought about equity, justice, human rights and discrimination—as cause areas to work in, rather than just within the EA community. (I’ll call this cluster just ‘equity’ going forward). I also think it would be cool if left/progressive organisations had a more EA mindset sometimes. At the same time, as I hope my answers below show, I do think there are some good reasons that EAs don’t prioritize equity, as well as some bad reasons.
So, why don’t EAs prioritize gender and racial equity, as cause areas?
1. Other groups are already doing good work on equity (i.e. equity is less neglected)
The social justice/progressive movement has got feminism and anti-racism pretty well covered. On the other hand, the central EA causes—global health, AI safety, existential risk, animal welfare—are comparatively neglected by other groups. So it kinda makes sense for EAs to say ‘we’ll let these other movements keep doing their good work on these issues, and we’ll focus on these other issues that not many people care about’.
Counter-point: are other groups using the most (cost-)effective methods to achieve their goals? EAs should, of course, be epistemically modest; but it seems that (e.g.) someone steeped in both EA and feminism might have some great suggestions for how to effectively improve gender equality and women’s experiences.
2. Equity work isn’t cost-effective
EAs care a lot about cost-effectiveness, i.e. how much demonstrable good impact you can get for your money. Maybe lots of equity/diversity work, though important, isn’t cost-effective: i.e. it’s expensive, and the benefit is uncertain.
Counter-point: maybe EAs should try to work out how one might do equity work cost-effectively. (‘Social justice’ as such is seen as a western/rich world thing, but the countries where EA organisations often work also have equity problems, presumably).
3. Equity isn’t an EA-shaped problem
EAs focus on what I think of as ‘technical’ solutions to social problems—i.e., ‘fixes’ that can be unilaterally performed by powerful actors such as nonprofits or governments or even single wealthy individuals. I see many equity issues as cultural problems—that is, a nonprofit can’t just swoop in and offer a wonk-ish fix; whole swathes of people have to be convinced to see the world differently and change their behaviour. Obviously, governments and NGOs do work on equity issues, but a big part of the “solution” to (e.g.) sexism is just “people, especially guys, learn basic feminist principles and stop being sexist to women”. This is really important to work on, but it’s not the style of solution that EAs tend to be drawn to.
4. EA is STEM-biased and equity is Humanities-biased
Related: historically many EAs have been from STEM or analytic philosophy academic backgrounds (citation needed: is this in the survey?). These EAs are naturally more drawn to science-and-maths-y problems and solutions, like ‘how to cure diseases and distribute those cures’ or ‘how to align artificial intelligence with human values’, since these are the types of problems they’ve learnt how to solve. More ‘humanities’-ish problems and solutions—like ‘what are the interpersonal dynamics that make people’s lives better or worse, and how can we change culture in the positive direction?’—are out of the modal EA’s comfort zone.
5. EAs are privileged and underrate the need for equity
There are lots of people with social privilege in EA: it’s majority white, straight, male, middle-class, etc. (Though getting more diverse on gender and race all the time, I think, and already more diverse on sexuality and stuff like mental health and neurodiversity than many social groups, I’d guess). You might predict that socially-privileged people would care less about equity issues than people who are directly impacted by those issues (though obviously this is not inevitable; many privileged people do care about equity issues that don’t affect them).
6. EA is apolitical and equity is left-coded
Perhaps relatedly, EA is ‘officially’ apolitical, and equity/discrimination issues are more associated with left-wing or liberal politics. In fact, most EAs are liberal or politically left, but a decent number are centrist, apolitical, libertarian or conservative. These EAs might not be interested in equity/discrimination issues; they might think that they’re overrated, or they might dislike standard progressive approaches to equity (i.e. they might be “anti-woke”). This political minority might be vocal or central enough to sway the consensus.
EAs are privileged and underrate the need for equity
How do you reconcile this hypothesis with the huge importance EAs assign, relative to almost everyone else, to causes that typically affect even less privileged beings than the victims of injustice and inequity that social justice and progressive folk normally focus on (i.e. oppressed people in rich countries, and especially in the United States)? I’m thinking of “the bottom billion” people globally, nonhuman animals in factory farms, nonhuman animals in the wild (including invertebrates), digital minds (who may experience astronomical amounts of suffering), and future people (who may never exist). EAs may still exhibit major moral blindspots and failings, but if we do much better than most people (including most of our critics) in the most extreme cases, it is hard to see how we could be overlooking (as opposed to consciously deprioritizing) the most mundane cases.
I’m not sure it’s right to call EA apolitical. If we define politics as being about who should have power in society and how that power should be used, EA is most definitely political. It may not be party-political, or coded along traditional left-right lines, but it’s clearly political.
On number 1, my understanding is that upstream disciplines (e.g., medicine, public health) created most of the highly effective interventions that EAs deployed, implemented, and scaled (e.g., bednets, vaccinations). EA brought in the resources and execution ability to implement stuff that already existed in the upstream disciplines but wasn’t being implemented well due to a lack of will, or a lack of emphasis on EA principles like cost-effectiveness and impartiality. So the question I’d have for the person who was steeped in both EA and feminism is whether there is an intervention already present in women’s studies, sociology, or another field that scored well on cost-effectiveness and might be something EA could implement.
I’m skeptical that EA would have been better at inventing highly cost-effective public health strategies than physicians, public-health experts, etc. with much greater subject-matter expertise. Likewise, I’d be skeptical of EA’s ability to invent a highly cost-effective equity strategy that mainstream subject-matter experts haven’t already come up with.
On number 3, I think it’s not only that potential solutions for equity in developing countries aren’t the kind of “solution that EAs tend to be drawn to.” There’s also a mismatch between EA’s available resources and the resources needed for most potential solutions. (As far as equity in developed countries goes, your first and second points are rather strong.) One could describe EA’s practical resources available for implementation—with some imprecision—as significant financial firepower and a few thousand really smart people, predominantly from the US and Western Europe.
But it’s doubtful that smart Westerners are the resource that high-impact equity work in developing countries really needs. As you implied, the skill set of those particular smart Westerners may not be ideal for equity work either. In contrast, malaria biology is the same everywhere, so the drop-off in effectiveness for Westerners working on malaria in cultures other than their own is manageable.
I think this dovetails with your number 4: I would suggest that ‘humanities’-ish work is significantly harder to do effectively in a culture significantly different from one’s own. But I would characterize both numbers 3 and 4 somewhat more in terms of equity often being a poorer fit for the resources EA has available to it (although I think lower comfort/interest is also a factor).