I used to feel so strongly about effective altruism. But my heart isn't in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven't been able to sustain a vegan diet for more than a short time. And so on.
But there isn't a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI's takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.
-The extent to which LessWrong culture has taken over or "colonized" effective altruism culture is such a bummer. I know there's been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture, and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or "rationalism", and the more knowledge and experience of it I've gained, the more I've accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.
-The stories about sexual harassment are so disgusting. They're really, really bad and crazy. And it's so annoying how many comments you see on EA Forum posts about sexual harassment that make exhausting, unempathetic, arrogant, and frankly ridiculous statements, if not borderline incomprehensible ones in some cases. You see these stories of sexual harassment in the posts and you see evidence of the culture that enables sexual harassment in the comments. Very, very, very bad. Not my idea of a community I can wholeheartedly feel I belong to.
-Kind of a similar story with sexism, racism, and transphobia. The level of underreaction I've seen to instances of racism has been crazymaking. It's similar to the comments under the posts about sexual harassment. You see people justifying or downplaying clearly immoral behaviour. It's sickening.
-A lot of the response to the Nonlinear controversy was disheartening. It was disheartening to see how many people were eager to enable, justify, excuse, downplay, etc. bad behaviour. Sometimes aggressively, arrogantly, and rudely. It was also disillusioning to see how many people were so… easily fooled.
-Nobody talks normal in this community. At least not on this forum, in blogs, and on podcasts. I hate the LessWrong lingo. To the extent the EA Forum has its own distinct lingo, I probably hate that too. The lingo is great if you want to look smart. It's not so great if you want other people to understand what the hell you are talking about. In a few cases, it seems like it might even be deliberate obscurantism. But mostly it's just people making poor choices around communication and writing style and word choice, maybe for some good reasons, maybe for some bad reasons, but bad choices either way. I think it's rare that writing with a more normal diction wouldn't enhance people's understanding of what you're trying to say, even if you're only trying to communicate with people who are steeped in the effective altruist niche. I don't think the effective altruist sublanguage is serving good thinking or good communication.
-I see a lot of interesting conjecture elevated to the level of conventional wisdom. Someone in the EA or LessWrong or rationalist subculture writes a creative, original, evocative blog post or forum post and then it becomes a meme, and those memes end up taking on a lot of influence over the discourse. Some of these ideas are probably promising. Many of them probably contain at least a grain of truth or insight. But they become conventional wisdom without enough scrutiny. Simply because an idea is "homegrown", it takes on the force of a scientific idea that's been debated and tested in peer-reviewed journals for 20 years, or of a widely held precept of academic philosophy. That seems intellectually the wrong thing to do, and also weirdly self-aggrandizing.
-An attitude I could call "EA exceptionalism", where people assert that people involved in effective altruism are exceptionally smart, exceptionally wise, exceptionally good, exceptionally selfless, etc. Not just above the average or median (however you would measure that), but part of a rare elite and maybe even superior to everyone else in the world. I see no evidence this is true. (In these sorts of discussions, you also sometimes see the lame argument that effective altruism is definitionally the correct approach to life because effective altruism means doing the most good, and if something isn't doing the most good, then it isn't EA. The obvious implication of this argument is that what's called "EA" might not be true EA, and maybe true EA looks nothing like "EA". So, this argument is not a defense of the self-identified "EA" movement or community or self-identified "EA" thought.)
-There is a dark undercurrent to some EA thought, along the lines of negative utilitarianism, anti-natalism, misanthropy, and pessimism. I think there is a risk of this promoting suicidal ideation because it basically is suicidal ideation.
-Too much of the discourse seems to revolve around how to control people's behaviours or beliefs. It's a bit too House of Cards. I recently read about the psychologist Kurt Lewin's study on the most effective ways to convince women to use animal organs (e.g. kidneys, livers, hearts) in their cooking during meat shortages during World War II. He found that a less paternalistic approach, one that showed more respect for the women's autonomy, was more effective in getting them to incorporate animal organs into their cooking. The way I think about this is: you didn't have to be manipulated to get to the point where you are in believing what you believe or caring this much about this issue. So, instead of thinking of how to best manipulate people, think about how you got to the point where you are and try to let people in on that in an honest, straightforward way. Not only is this probably more effective, it's also more moral and shows more epistemic humility (you might be wrong about what you believe, and that's one reason not to try to manipulate people into believing it).
-A few more things but this list is already long enough.
Put all this together and the old stuff I cared about (charity effectiveness, giving what I can, expanding my moral circle) is lost in a mess of other stuff that is antithetical to what I value and what I believe. I'm not even sure the effective altruism movement should exist anymore. The world might be better off if it closed down shop. I don't know. It could free up a lot of creativity and focus and time and resources to work on other things that might end up being better things to work on.
I still think there is value in the version of effective altruism I knew around 2015, when the primary focus was on global poverty and the secondary focus was on animal welfare, and AGI was on the margins. That version of effective altruism is so different from what exists today (which is mostly about AGI and has mostly been taken over by the rationalist subculture) that I have to consider them two different things. Maybe the old thing will find new life in some new form. I hope so.
I'd distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they've always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil's website, and so far this year it seems well down from 2018 but also well up from 2015; then again, two months isn't much of a sample.) Not that that means you're not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
Yes, this seems similar to how I feel: I think the major donor(s) have re-prioritized, but I am not so sure how many people have switched from other causes to AI. I think EA is more left to the grassroots now, and the forum has probably increased in importance. If the major donors make the forum all about AI, then we have to create a new forum! But as donors shift towards AI, the forum will inevitably see more AI content. Maybe some functions to "balance" the forum posts so one gets representative content across all cause areas? Much like they made it possible to separate out community posts?
On cause prioritization, is there a more recent breakdown of how more and less engaged EAs prioritize? Like an update of this? I looked for this from the 2024 survey but could not find it easily: https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization
Thanks for sharing this. While I personally believe the shift in focus to AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with many of the other concerns you shared and agree with many of them (especially the LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand in your situation if you don't want to interact with the community anymore, I just want to share that I believe your voice is really important, and I hope you continue to engage with EA! I wouldn't want the movement to discourage anyone who shares its principles (like "let's use our time and resources to help others the most"), but disagrees with how it's being put into practice, from actively participating.
My memory is that a large number of people took the NL controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then, there are something like 90 agree votes to 30-something disagree votes and 200 karma on Yarrow's comment saying NL still seems bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don't think people dropped the ball here, really; people were struggling honestly to take accusations of bad behaviour seriously without getting into witch-hunt dynamics.
Good point, I guess my lasting impression wasn't entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn't feel discouraged from actively participating in EA.