I first discovered the EA community in March, after reading Max H. Bazerman’s book Better, Not Perfect. From there, I discovered 80,000 Hours, Open Philanthropy, etc. I was really inspired by EA’s utilitarian approach to maximizing social impact, and EA’s philosophy validated many of my own thoughts about philanthropy.
However, I noticed that the EA community is predominantly white, male, and tech-focused. This demographic has historically been disconnected from social impact, so I am curious about the community’s thoughts on this lack of diversity and whether any initiatives are underway to improve diversity within the EA community.
[Edit: This post had negative karma when I made this comment]
I’m sorry that your first post on the forum is getting a bit of a negative reaction. It’s great to have you here and I hope this isn’t super off-putting!
If you are interested in some actual numbers on demographics, check out the EA Survey posts, like this one from 2024.
My thoughts on your questions:
I personally don’t mind much. Almost any community will have demographic weirdness of some kind, and I don’t inherently value diversity. I know others find it concerning, but I am not aware of survey data on this. You can definitely find quite a bit of writing on this topic if you search the forum.
I am unsure what you mean by “improve”. But most EAG events I have been to have meet-ups for various demographic groups. I have never attended these (despite “qualifying” for a few). But I know others really like organising and attending these. The most recent EAG London had meet-ups for the following: neurodiversity, people of colour, Christians, women and NB people, people from low and middle income countries, Jews, socioeconomic diversity, LGBTQ+.
(You may be getting downvoted for the line “This demographic has historically been disconnected from social impact”. Perhaps consider elaborating why you think that.)
This seems extremely false to me. What evidence do you have for this being true?
I mostly just want to join the chorus of people welcoming you here and repudiate the negative reaction that a very reasonable question is getting. It’s worth adding three things:
The demographic balance is improving over time, with new EAs in 2023 and 2024 being substantially more diverse.
Anecdotally, the EA forum skews whiter, more male, and more Bay Area. I personally feel that the forum is increasingly out of touch with ‘EA on the ground’, especially in cause areas such as global health. Implementing EAs very rarely post here, unfortunately. Don’t take any reaction here as representative of broader EA.
Anecdotally (and I believe I’ve seen some stats to back this up), some cause areas are more diverse than others. Global health and animal welfare, in particular, seem to be substantially more diverse.
For what it’s worth, this is not my impression at all. Bay Area EAs (e.g. me) mostly consider the EA Forum to be very unrepresentative of their perspective, to the extent that it’s very rarely worthwhile to post here (which is why they often post on LessWrong instead).
In what way do you find it unrepresentative? Just curious because I am unfamiliar with the dynamics here.
There’s a social and professional community of Bay Area EAs who work on issues related to transformative AI. People in this cluster tend to have median timelines to transformative AI of 5 to 15 years, tend to think that AI takeover is 5–70% likely, and tend to think that we should be fairly cosmopolitan in our altruism.
People in this cluster mostly don’t post on the EA Forum for a variety of reasons:
Many users here don’t seem very well-informed.
Lots of users here disagree with me on some of the opinions about AI that I stated above. Obviously it’s totally reasonable for people to disagree on those points, at least before they’ve heard the arguments. But it usually doesn’t feel worth my time to argue about those points. I want to spend much more of my time discussing the implications of these basic beliefs than arguing about their probabilities. LessWrong is a much better place for this.
The culture here seems pretty toxic. I don’t really feel welcome here. I expect people to treat me with hostility as a result of being moderately influential as an AI safety researcher and executive.
To be clear, I think it’s a shame that the EA Forum isn’t a better place for people like me to post and comment.
You can check for yourself that Bay Area EAs don’t really want to post here by looking up prominent Bay Area EAs and noting that they commented here far more several years ago than they do today.
I wasn’t sure myself what has been done in the past to improve diversity, so I checked the diversity and inclusion tag on the forum. This led me to a very thorough, very detailed post from March 2025 called “History of diversity efforts and trends in EA” by Julia Wise, who works at the Centre for Effective Altruism (CEA) as a community liaison. (That post is quite long, so you might want to look at the SummaryBot summary first and then check the parts of the article that interest you.)
I also know that there are some groups online like the Facebook group Women and non-binary people in Effective Altruism. Julia Wise’s post lists some others under “Organizational / program efforts”.
Beyond the groups Julia’s post lists, there might also be Discord servers, Slack teams, or other private or semi-private groups for people in EA of certain demographics. You’d have to look around and maybe ask around.
I’m bringing up these kinds of groups both because a) I have to think they increase retention of people in EA, since people in these demographics have a place to commiserate and support each other, and b) a group like this might be a good place for you to inquire about this topic further.
Part of the difficulty with improving diversity in EA is EA’s connection to LessWrong and the San Francisco Bay Area rationalist community, which has an alarmingly high level of endorsement of, curiosity toward, or sympathy to far-right, alt-right, racist, anti-feminist, anti-trans, white nationalist, white supremacist, and/or authoritarian views.[1] It would be great if EA could just chop off this noxious community like a necrotic limb; alas, for now, it is part and parcel of the EA community. Improving diversity in EA will always be met with some very vocal resistance as long as the two communities remain entwined. I don’t know what to do about this, except either give up on EA or keep arguing with people about it.
My recommendation is if you care about diversity and want to be involved in EA, it might be better to focus on an EA group local to you or some other corner of the EA movement where a significant majority of the people do care about diversity and would support diversity initiatives. In public online spaces related to EA like the EA Forum, diversity and diversity initiatives are a contested topic that not everyone agrees is good. Similarly, the online groups that focus on certain demographics in EA might be a nice experience for that reason.
In terms of effective altruism conferences like EA Global (EAG) or the various EAGx conferences, I don’t know what the atmosphere around diversity is like. I also don’t know how diversity is handled at the various major EA organizations like the Centre for Effective Altruism, 80,000 Hours, Giving What We Can, or EA Funds.
I think different pockets of the EA movement in different places or focusing on different cause areas or involving different networks of people can sometimes have quite different atmospheres or subcultures (for lack of a better word), so I don’t want to give you the impression that all of EA is the same with respect to diversity.
See this blog post by the philosopher David Thorstad for an overview/introduction. I also think this EA Forum post by a pseudonymous participant in several Bay Area rationalist events in 2024 encapsulates some of the very serious problems with that community.
In previous surveys, diversity/JEID-related issues have often been mentioned as reasons for dissatisfaction. That said, there are diverse views on the topic (many of which you can read here; it’s much discussed).
Community Health Supplementary Survey
EA Survey 2019: Community Information
Hello! Great to have you on-board.
While I’m very supportive of improving the intracommunity experience of members of marginalised groups, I’m skeptical of outreach to them (above and beyond general outreach). Being an EA means making a substantial personal sacrifice to help other people you’ll probably never meet. I’m not sure it’s effective, or appropriate, to disproportionately ask for that sacrifice from members of marginalised groups.
For example, I don’t think we should be asking black Americans (more than white Americans) to fundraise for malaria prevention for black African children, just because they’re both black. That, to me, seems incredibly crass. And being an EA as a whole is basically that, a couple of steps removed: make substantial sacrifices to effectively help some of the most marginalised people in the world.
I don’t see anything in the OP about asking for disproportionate representation of minorities. They seem to be advocating for proportionate representation, and noticing that EA fails to live up to this expectation.
I also don’t think that EA truly is only a “sacrifice”. For one thing, plenty of EA jobs pay quite well. EA is also an opportunity to do good. EA also has a lot of influence, and directs substantial amounts of money. It’s totally reasonable to be concerned that the people making these decisions are not representative of the people that they affect, and may lack useful insight as a result.
I would argue that EA jobs don’t pay well at all for the level of work they expect, and that they all carry a substantial sacrifice premium compared to other jobs. EA job hunting is also awful and the work is horrendously insecure, so I definitely wouldn’t recommend EA as a way to get ahead in life. I consider the overmarketing of this to ambitious young idealists to be one of EA’s worst failures.
I would agree that I view EA as a great opportunity, and that through such sacrifice we achieve a somewhat spiritual process of transformative self-actualisation. But I don’t think someone else should have to, particularly not someone who is already struggling in a more marginalised position. And in my experience, they generally don’t.
Legitimately, if you have any “you should join up with EA” argument that works on a marginalised person that isn’t just lying by pretending there’s more in it for them than there actually is, please let me know because I’d like to use it.
EA jobs pay great if you’re from the global south—and yet!
(This discussion seems to be anchoring on diversity as it’s practiced in wealthy economies, which I don’t think necessarily has to be the main way of making EA diverse)
Even if you think that involvement in EA is mostly a matter of altruistic self-sacrifice (which I think is an oversimplification), it can still be true that there are women, people of colour, LGBTQ people, or people from other marginalized groups who want to make that altruistic self-sacrifice. Should they have the autonomy, the right to make that decision? I think so.
If people from these groups say that the barriers to their involvement in EA are not the amount of self-sacrifice involved, but other factors like perceived unwelcomingness toward people of their demographic, not seeing other people like them represented in EA, or a lack of trust or a sense of safety (e.g. around sexual harassment or instances of discrimination or prejudice, or the community’s response to them), then that is a moral failing on the part of EA, and not noblesse oblige.
I agree with you that improving intracommunity experience is important.
I’m glad to hear you are inspired by EA’s utilitarian approach to maximizing social impact; I too am inspired by it and I have very much appreciated being involved with EA for the last decade.
I think you should probably ask questions as basic as this to AIs before asking people to talk to you about them. Here’s what Claude responded with.
Claude’s answer is nearly useless, so this seems to confirm that asking an LLM this question would not have been particularly helpful. [Substantially edited on 2025-11-10 at 17:08 UTC.]
I feel like Claude’s answer is totally fine. The original question seemed to me consistent with the asker having read literally nothing on this topic before asking; I think that the content Claude said adds value given that.
Not knowing anything about an obscure topic relating to the internal dynamics or composition of the EA community and asking here is perfectly fine. [Substantially edited on 2025-11-10 at 17:04 UTC.]
This is not an obscure topic. It’s been written about endlessly! I do not want to encourage people to make top-level posts asking questions before Googling or talking to AIs, especially on this topic.
I like Claude’s response a lot more than you do. I’m not sure why. I agree that it’s a lot less informative than your response.
(The post including “This demographic has historically been disconnected from social impact” made me much less inclined to want this person to stick around.)
“To a worm in horseradish, the world is horseradish.” What’s an obscure topic or not is a matter of perspective.
If you don’t want to deal with people who are curious about effective altruism asking questions, you can safely ignore such posts. Four people were willing to leave supportive and informative comments on the topic. The human touch may be as important as the information.
I edited my comments above because I worried what I originally wrote was too heated and I wanted to make a greater effort to be kind. I also worried I mistakenly read a dismissive or scolding tone in your original comment, and I would especially regret getting heated over a misunderstanding.
But your latest comment comes across to me as very unkind and I find it upsetting. I’m not really sure what to say. I really don’t feel okay with people saying things like that.
I think if you don’t want to interact with people who are newly interested in EA or want to get involved for the first time, you don’t have to, and it’s easily avoided. I’m not interested in a lot of posts on the EA Forum, and I don’t comment on them. If it ever gets to the point where posts like this one become so common it makes it harder to navigate the forum, everyone involved would want to address that (e.g. maybe have a tag for questions from newcomers that can be filtered out). For now, why not simply leave it to the people who want to engage?
Barring pretty unusual circumstances, I don’t think commenting on the relative undesirability of an individual poster sticking around is warranted. Especially when the individual poster is new and commenting on a criticism-adjacent area.
I don’t like the quoted sentence from the original poster either, as it stands—if someone is going to make that assertion, it needs to be better specified and supported. But there are lots of communities in which it wouldn’t be seen as controversial or needing support (especially in the context of a short post). So judging a newcomer for not knowing that this community would expect specification/support does not seem appropriate.
Moreover, if we’re going to take LLM outputs seriously, it’s worth noting that ChatGPT thinks the quote is significantly true:
Even though I don’t take ChatGPT’s answer too seriously, I do think it is evidence that the original statement was neither frivolous nor presented in bad faith.
I find this post quite good, especially the section I linked, specifically its point that solidarity ≠ altruism. Also this post.