Hi!
First of all, welcome :)
Second, answers to your questions:
How much does EA focus on anti-discrimination, equity and human rights?
I’d say ‘not all that much, but somewhat’. There are lots of EAs who are definitely trying to make the EA community itself more diverse, more just, and a better place for people with oppressed/marginalized identities (see e.g. this post on advice for addressing sexual misconduct in the community). But EAs don’t tend to focus on anti-discrimination, equity, or human rights in the wider world as a cause area.
Why doesn’t EA focus on these things? I think for a few reasons, some of which are better than others:
-there is already a lot of good work being done on equity and human rights issues by other groups (e.g. the social justice/progressive movement broadly). On the other hand, some of the causes EAs focus on—global health, AI safety, animal welfare—are comparatively neglected by other groups. So it kinda makes sense for EAs to say ‘we’ll let these other movements keep doing their good work on these issues, and we’ll focus on these other issues that not many people care about’.
-relatedly, EAs care a lot about cost-effectiveness (ie how much demonstrable good you can get for your money), and maybe lots of equity/diversity work, though important, isn’t cost-effective (ie it’s expensive and the benefit is a bit uncertain).
-EAs focus on what I think of as ‘technical’ solutions to problems, ie ‘fixes’ that powerful entities can perform, whereas I see a lot of equity/diversity issues as cultural problems: an NGO or government can’t just swoop in and do some wonk-ish quick fix; whole swathes of people have to change their behaviour. Obviously governments and NGOs do work on diversity/equity issues, but a big part of (eg) feminist or anti-sexist work is just ‘guys learn basic feminist principles and stop being sexist to women’. This is a really important thing to work on, but it’s not the ‘style’ of solution that EAs tend to like.
I personally think it’s a shame that more EA work isn’t done on this, because if you can successfully change the culture in positive directions, that can be massively impactful.
-relatedly, historically many EAs have come from STEM or analytic philosophy backgrounds, so they are more drawn to ‘science-y’ problems and solutions (like ‘how to cure diseases and distribute those cures’ or ‘how to align artificial intelligence with human values’) rather than ‘humanities’ problems and solutions (like ‘what are the interpersonal dynamics that make people’s lives better or worse, and how can we change culture in a positive direction?’).
-there are lots of people with social privilege in EA: it’s majority white, straight, male, middle-class, etc. (Though it’s getting more diverse on gender and race all the time, I think, and is already more diverse on sexuality and things like mental health and neurodiversity than many social groups, I’d guess.) You might predict that socially privileged people would care less about equity issues than people who are directly affected by them (though this isn’t inevitable; many privileged people do care about equity issues).
-perhaps relatedly, EA is ‘officially’ apolitical, and equity/discrimination issues are more associated with left-wing or liberal politics. In fact, most EAs are liberal or left, but a decent number are centrist, apolitical, libertarian, or conservative. These EAs might not be interested in equity/discrimination issues, either because they don’t think they’re important or because they dislike standard progressive approaches to them.
Anyway, I wrote a mini-essay there XD but I hope it’s somewhat helpful. Fwiw, I’m a social progressive, and I would love to see projects that brought an EA mindset to equity, human rights, or anti-discrimination work.