I feel like Claude’s answer is totally fine. The original question seemed to me consistent with the asker having read literally nothing on this topic before asking; I think the content of Claude’s answer adds value given that.
Not knowing anything about an obscure topic relating to the internal dynamics or composition of the EA community and asking here is perfectly fine. [Substantially edited on 2025-11-10 at 17:04 UTC.]
This is not an obscure topic. It’s been written about endlessly! I do not want to encourage people to make top-level posts asking questions before Googling or talking to AIs, especially on this topic.
I like Claude’s response a lot more than you do. I’m not sure why. I agree that it’s a lot less informative than your response.
(The post including “This demographic has historically been disconnected from social impact” made me much less inclined to want this person to stick around.)
“To a worm in horseradish, the world is horseradish.” What counts as an obscure topic is a matter of perspective.
If you don’t want to deal with people who are curious about effective altruism asking questions, you can safely ignore such posts. Four people were willing to leave supportive and informative comments on the topic. The human touch may be as important as the information.
I edited my comments above because I worried what I originally wrote was too heated and I wanted to make a greater effort to be kind. I also worried I mistakenly read a dismissive or scolding tone in your original comment, and I would especially regret getting heated over a misunderstanding.
But your latest comment comes across to me as very unkind and I find it upsetting. I’m not really sure what to say. I really don’t feel okay with people saying things like that.
I think if you don’t want to interact with people who are newly interested in EA or want to get involved for the first time, you don’t have to, and it’s easily avoided. I’m not interested in a lot of posts on the EA Forum, and I don’t comment on them. If it ever gets to the point where posts like this one become so common it makes it harder to navigate the forum, everyone involved would want to address that (e.g. maybe have a tag for questions from newcomers that can be filtered out). For now, why not simply leave it to the people who want to engage?
(The post including “This demographic has historically been disconnected from social impact” made me much less inclined to want this person to stick around.)
Barring pretty unusual circumstances, I don’t think commenting on the relative undesirability of an individual poster sticking around is warranted. Especially when the individual poster is new and commenting on a criticism-adjacent area.
I don’t like the quoted sentence from the original poster either, as it stands—if someone is going to make that assertion, it needs to be better specified and supported. But there are lots of communities in which it wouldn’t be seen as controversial or needing support (especially in the context of a short post). So judging a newcomer for not knowing that this community would expect specification/support does not seem appropriate.
Moreover, if we’re going to take LLM outputs seriously, it’s worth noting that ChatGPT thinks the quote is significantly true:
Even though I don’t take ChatGPT’s answer too seriously, I do think it is evidence that the original statement was neither frivolous nor presented in bad faith.