I’m glad to hear you are inspired by EA’s utilitarian approach to maximizing social impact; I too am inspired by it and I have very much appreciated being involved with EA for the last decade.
I think you should probably ask questions as basic as this to AIs before asking people to talk to you about them. Here’s what Claude responded with.
The observation about EA’s demographic skew is accurate and widely acknowledged within the community. A few points worth making:
On the historical pattern: The claim that white, male, tech-focused demographics are “historically disconnected from social impact” isn’t quite right—these demographics have been heavily involved in philanthropy and social reform movements throughout history (from industrialist philanthropy to the civil rights movement’s diverse coalition). But the observation that EA specifically has a particular demographic concentration is valid.
Why this pattern exists: Several factors likely contribute:
EA grew out of academic philosophy and rationalist communities that had their own demographic patterns
The movement’s early focus areas (AI safety, global poverty, animal welfare) and analytical approach appealed to certain demographics more than others
Network effects and social clustering naturally amplified initial patterns
Geographic concentration in places like the Bay Area and Oxford
On diversity efforts: EA organizations have made various attempts to broaden participation, though with mixed results. There are efforts around:
Outreach to different universities and regions
Scholarships and programs aimed at underrepresented groups
Discussion of how framing and culture might inadvertently exclude some people
The harder question: There’s ongoing debate about whether demographic diversity is primarily valuable instrumentally (does it improve EA’s thinking and impact?) or intrinsically (is it important regardless of instrumental benefits?). Different people in EA would answer this differently, and it connects to deeper questions about EA’s core commitments and priorities.
Worth noting that some core EA principles (like cause impartiality and willingness to update beliefs based on evidence) might themselves be culturally specific in ways the movement doesn’t always recognize.
Claude’s answer is nearly useless, so this seems to confirm that asking an LLM this question would not have been particularly helpful. [Substantially edited on 2025-11-10 at 17:08 UTC.]
I feel like Claude’s answer is totally fine. The original question seemed to me consistent with the asker having read literally nothing on this topic before asking; given that, I think what Claude said adds value.
Not knowing anything about an obscure topic relating to the internal dynamics or composition of the EA community and asking here is perfectly fine. [Substantially edited on 2025-11-10 at 17:04 UTC.]
This is not an obscure topic. It’s been written about endlessly! I do not want to encourage people to make top-level posts asking questions before Googling or talking to AIs, especially on this topic.
I like Claude’s response a lot more than you do. I’m not sure why. I agree that it’s a lot less informative than your response.
(The post including “This demographic has historically been disconnected from social impact” made me much less inclined to want this person to stick around.)
“To a worm in horseradish, the world is horseradish.” What’s an obscure topic or not is a matter of perspective.
If you don’t want to deal with people who are curious about effective altruism asking questions, you can safely ignore such posts. Four people were willing to leave supportive and informative comments on the topic. The human touch may be as important as the information.
I edited my comments above because I worried what I originally wrote was too heated and I wanted to make a greater effort to be kind. I also worried I mistakenly read a dismissive or scolding tone in your original comment, and I would especially regret getting heated over a misunderstanding.
But your latest comment comes across to me as very unkind and I find it upsetting. I’m not really sure what to say. I really don’t feel okay with people saying things like that.
I think if you don’t want to interact with people who are newly interested in EA or want to get involved for the first time, you don’t have to, and it’s easily avoided. I’m not interested in a lot of posts on the EA Forum, and I don’t comment on them. If it ever gets to the point where posts like this one become so common it makes it harder to navigate the forum, everyone involved would want to address that (e.g. maybe have a tag for questions from newcomers that can be filtered out). For now, why not simply leave it to the people who want to engage?
(The post including “This demographic has historically been disconnected from social impact” made me much less inclined to want this person to stick around.)
Barring pretty unusual circumstances, I don’t think commenting on the relative undesirability of an individual poster sticking around is warranted. Especially when the individual poster is new and commenting on a criticism-adjacent area.
I don’t like the quoted sentence from the original poster either, as it stands—if someone is going to make that assertion, it needs to be better specified and supported. But there are lots of communities in which it wouldn’t be seen as controversial or needing support (especially in the context of a short post). So judging a newcomer for not knowing that this community would expect specification/support does not seem appropriate.
Moreover, if we’re going to take LLM outputs seriously, it’s worth noting that ChatGPT thinks the quote is significantly true:
Even though I don’t take ChatGPT’s answer too seriously, I do think it is evidence that the original statement was neither frivolous nor presented in bad faith.