The context for what I'm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here.
Warning: This is a polemic that uses harsh language. I still completely, sincerely mean everything I say here and I consciously endorse it.[1]
It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty, citing overtly racist and white supremacist sources to support this argument (including a source with significant connections to the Nazi Party in 1930s and 1940s Germany and to the American Nazi Party, a neo-Nazi party), and that post can receive a significant amount of approval and defense from people in EA, even after perceptive readers strip away the thin disguise over the racism. That is such a bonkers thing and such a morally repugnant thing that I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can't correct it.[2]
My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/Bay Area rationalist community is that they don't like social justice, anti-racism, or socially/culturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the "rationalist" community?? What are you talking about?! As I recall based on votes, a majority of forum users who voted on the comment agreed.[3]
Overall, LessWrong users seem broadly sympathetic to racist arguments and views.[4] Same for sexist or anti-feminist views, and extremely so for anti-LGBT (especially anti-trans) views. Personally, I find it to be the most unpleasant website I've spent more than ten hours reading. When I think of LessWrong, I picture a dark, dingy corner of a house. I truly find it to be awful.
The more I've thought about it, the more truth I find in the blogger Ozy Brennan's interpretation of LessWrong and the rationalist community through the concept of the "cultic milieu" and a comparison to new religious movements (not cults in the more usual sense connoting high-control groups). Ozy Brennan self-identifies as a rationalist and is active in the community, which makes this analysis far more believable than if it came from an outsider. The way I'd interpret Ozy's blog post, which Ozy may not agree with, is that rationalists are in some sense fundamentally devoted to being incorrect, since they're fundamentally devoted to being against consensus or majority views on many major topics (regardless of whether those views are correct or incorrect), and inevitably that will lead to having a lot of incorrect views.
I see very loose, very limited analogies between LessWrong and online communities devoted to discussing conspiracy theories like QAnon, and between LessWrong and online incel communities. Conspiracy theory communities, because LessWrong has a suspicious, distrustful, at least somewhat paranoid or hypervigilant view of people and the world, this impulse to turn over rocks to find where the bad stuff is. Also, there's the impulse to connect too much, to subsume too much under one theory or worldview, and to rely too much on one's own fringe community to explain the world and interpret everything. Both, in a sense, are communities built around esoteric knowledge. And, indeed, I've seen some typical conspiracy theory-seeming stuff on LessWrong related to American intelligence agencies and so on.
Incel communities, because the atmosphere of LessWrong feels rather bitter, resentful, angry, unhappy, isolated, desperate, arrogant, and hateful, and in its own way it is also a sort of self-help or commiseration community for young men who feel left out of the normal social world. But rather than encouraging healthy, adaptive responses to that experience, both communities encourage anti-social behaviour: leaning into distorted thinking, resentment, and disdainful views of other people.
I just noticed that Ozy recently published a much longer article in Asterisk Magazine on the topic of actual high-control groups or high-demand groups with some connection to the rationalist community. It will take me a while to properly read the whole thing and to think about it. But at a glance, there are some aspects of the article that are relevant to what I'm discussing here, such as this quote:
Nevertheless, some groups within the community have wound up wildly dysfunctional – a term I'm using to sidestep definitional arguments about what is and isn't a cult. And some of the blame can be put on the rationalist community's marketing.
The Sequences make certain implicit promises. There is an art of thinking better, and we've figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
This is, not to put too fine a point on it, not true.
Multiple interviewees remarked that the Sequences create the raw material for a cult. To his credit, their author, Eliezer Yudkowsky, shows little interest in running one.
And this quote:
But people who are drawn to the rationalist community by the Sequences often want to be in a cult. To be sure, no one wants to be exploited or traumatized. But they want some trustworthy authority to change the way they think until they become perfect, and then to assign them to their role in the grand plan to save humanity. They're disappointed to discover a community made of mere mortals, with no brain tricks you can't get from Statistics 101 and a good CBT workbook, whose approach to world problems involves a lot fewer grand plans and a lot more muddling through.
And this one:
Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. "There's this belief [among rationalists]," she said, "that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad – not just religion, but also normal 'trust-the-experts' science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren't actually smart enough to form very good conclusions once they start thinking for themselves."
One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don't realize that you can't outperform it, and wind up even wronger. But another potential failure is that, knowing both that your society is wrong and that you can't do better, you start looking for someone even more right. Paradoxically, the desire to ignore the experts can make rationalists more vulnerable to a charismatic leader.
Or, as Jessica Taylor said, "They do outsource their thinking to others, but not to the typical authorities."
In principle, you could have the view that the typical or median person is benefitted by the Sequences or by LessWrong or the rationalist community, and it's just an unfortunate but uncommon side-effect for people to slip into cults or high-control groups. It sounds like that's what Ozy believes. My view is much harsher: by and large, the influence that LessWrong/the rationalist community has on people is bad, and people who take these ideas and this subculture to an extreme are just experiencing a more extreme version of the bad that happens to pretty much everyone who is influenced by these ideas and this subculture. (There might be truly minor exceptions to this, but I still see this as the overall trend.)
Obviously, there is now a lot of overlap between the EA Forum and LessWrong and between EA and the rationalist community. I think to the extent that LessWrong and the rationalist community have influenced EA, EA has become something much worse. It's become something repelling to me. I don't want any of this cult stuff. I don't want any of this racist stuff. Or conspiracy theory stuff. Or harmful self-help stuff for isolated young men. I'm happy to agree with the consensus view most of the time because I care about being correct much more than I care about being counter-consensus. I am extremely skeptical toward esoteric knowledge and I think it's virtually always either nonsense or prosaic stuff repackaged to look esoteric. I don't buy these promises of unlocking powerful secrets through obscure websites.
There was always a little bit of overlap between EA and the rationalist community, starting very early on, but it wasn't a ruinous amount. And it's not like EA didn't independently have its own problems before the rationalist community's influence increased a lot, but those problems seemed more manageable. The situation now feels like the rationalist community is unloading more and more of its cargo onto the boat that is EA, and EA is just sinking deeper and deeper into the water over time. I feel sour and queasy about this because EA was once something I loved and it's becoming increasingly laden with things I oppose in the strongest possible terms, like racism, interpersonal cruelty, and extremely irrational thinking patterns. How can people who were in EA because of global poverty and animal welfare, who had no previous affiliation with the rationalist community, stand this? Are they all gone already? Have they opted to recede from public arguments and just focus on their own particular niches? What gives?
And to the extent that the racism in EA is independently EA's problem and has nothing to do with the influence of the rationalist community (which obviously has to be more than nothing), then that is 100% EA's problem. But I can't imagine racism in EA could be satisfactorily addressed without significant conflict with, and alienation of, many people who overlap between the rationalist community and EA and who either endorse or strongly sympathize with racist views. (For example, in January 2023, when the Centre for Effective Altruism published a brief statement that affirmed the equality of Black people in response to the publication of a racist email by the philosopher Nick Bostrom, the most upvoted comment was from a prominent rationalist that started, "I feel really quite bad about this post," and argued at length that universal human equality is not a tenet of effective altruism. This is unbelievably foolish and unbelievably morally wrong. Independently, that person has said and done things that seem to indicate either support or sympathy for racist views, which makes me think it was probably not just a big misunderstanding.)[5] Hence the diversion from talking about racism on the EA Forum into discussing the rationalist community's influence on EA.
Racism is paradigmatically evil and there is no moral or rational justification for it. Don't lose sight of something so fundamental and clear. Don't let EA drown under the racism and all the other bad stuff people want to bring to it. (Hey, now EA is the drowning child! Talk about irony!)
Incidentally, LessWrong and the rationalist community are dead wrong about near-term AGI as well. Specifically, the probability of AGI before January 1, 2033 is significantly less than 0.1%, and the MIRI worldview on alignment is most likely either just generally wrong/misguided or at least not applicable to deep learning-based systems. That poses its own big problem for EA to the extent that EA has been influenced to accept LessWrong/rationalist views about near-term AGI. So, the influence of the rationalist community on EA has been damaging in multiple respects. (Although, again, EA bears responsibility for its part in all of it, both for allowing the influence and for whatever portion of the mistake it would have made without that influence.)
About a day after posting this quick take, I changed the first sentence of this quick take from just italicized to a heading to make the links to the Reflective Altruism posts more prominent and harder to miss. The sentence was always there.
Edited on October 25, 2025 at 11:22 PM Eastern to add: If you don't know about the incident I'm referring to here, the context for what I'm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here. The links to these Reflective Altruism posts were always in the first sentence of this quick take, but I'm adding this footnote to make those links harder to miss.
Edited on October 25, 2025 at 10:52 PM Eastern to add: I purposely omitted a link to this comment because I didn't want to make this quick take a confrontation against the person who wrote it. But if you don't believe me and you want to see the comment for yourself, send me a private message and I'll send you the link.
Edited on October 25, 2025 at 11:25 PM Eastern to add: This is extensively documented in a different Reflective Altruism post from the two I have already linked, which you can find here.
Edited on October 25, 2025 at 11:19 PM Eastern to add: I'm referring here to the Manifest 2024 conference, which was held at Lighthaven in Berkeley, California, a venue owned by Lightcone Infrastructure, the same organization that owns and operates LessWrong. I'm also referring to the discussions that happened after the event. There have been many posts about this event on the EA Forum. One post I found interesting was from a pseudonymous self-described member of the rationalist community that was critical of some aspects of the event and of some aspects of the rationalist community. You can read that post here.
Edited on October 26, 2025 at 11:57 PM Eastern to add: See also the philosopher David Thorstad's post about Manifest 2024 on his blog Reflective Altruism. David's posts are nearly encyclopedic in their thoroughness and have an incredibly high information density.
A possible explanation for why this post is heavily downvoted:
It makes serious, inflammatory, accusatory, broad claims in a way that does not promote civil discussion
It rarely cites specific examples and facts that would serve to justify these claims
You linked to an article by Reflective Altruism, but I think it would have been beneficial to put links to specific examples directly in your text.
Two of the specific examples you use do not seem to be presented accurately:
About the post about genetically editing Africans to overcome poverty: "That post can receive a significant amount of approval and defense from people in EA. [...] Effective altruism as a movement probably deserves to fail for that, if it can't correct it"
You're talking about that post. However, you're failing to mention that it currently has negative karma and twice as much disagreement as agreement. If anything, it is representative of something that EA (as a whole) does not support.
"When the CEA published a brief statement that affirmed the equality of Black people in response to the publication of a racist email by the philosopher Nick Bostrom, the most upvoted comment was from a prominent rationalist that started, 'I feel really quite bad about this post,' and argued at length that universal human equality is not a tenet of effective altruism. This is unbelievably foolish and unbelievably morally wrong."
Your claim implies that the commenter said that for racist reasons. However, they say in that very comment that "I value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality." And much of their disagreement centred on the form of the statement. Why did you not specify that in your post?
Thank you for your comment.
I am a strong believer in civility and kindness, and although my quick take used harsh language, I think that is appropriate. I think, in a way, it can even be more respectful to speak plainly, directly, and honestly, as opposed to being passive-aggressive and dressing up insults in formal language.
I am expecting people to know the context or else learn what it is. It would not be economical for me to simply recreate the work already done on Reflective Altruism in my quick take.
That post only has negative karma because I strong-downvoted it. If I remove my strong downvote, it has 1 karma. 8 agrees and 14 disagrees is indeed more disagrees than agrees, but that is still not a good ratio. Also, this is about more than just the scores on the post; it's also about the comments defending the post, both on that post itself and elsewhere, and the scores on those comments.
I don't think racist ideas should have 1 karma when you exclude my vote.
I think if someone says, in response to a racist email about Black people from someone in our community, that Black people have equal value to everyone else, your response should not be to argue that Black people have "approximately" as much value as white people. Normally, I would extend much more benefit of the doubt and try to interpret the comment more charitably, but subsequent evidence (namely, the commenter's association with and defense of people with extreme racist views) has made me interpret that comment much less charitably than I otherwise would. In any case, even on the most charitable interpretation, it is foolish and morally wrong.
I think the issue is that, from my standpoint, there is a combination of harsh language, many broad claims about EA and LessWrong that are both very negative and vague, and a lack of specific evidence in the text.
I expect few people here to be swayed by this kind of communication, since you may simply be overreacting and have an extremely low threshold for using terms like "racism". It's the discourse I tend to see on Twitter.
As an example of what I'd call an overreaction, when you say that someone did something "unbelievably foolish and unbelievably morally wrong," I am thinking of very bad stuff, like committing fraud with charity money.
I am not thinking about a comment where someone said that "I value people approximately equally in impact estimates" (instead of "absolutely equally"). The lack of evidence means I can't assess the commenter's specific intentions.
There is a lot of context to fill people in on and I'll leave that to the Reflective Altruism posts. I also added some footnotes that provide a bit more context. I wasn't even really thinking about explaining everything to people who don't already know the background.
I may be overreacting or you may be underreacting. Who's to say? The only way to find out is to read the Reflective Altruism posts I cited and get the background knowledge that my quick take presumes.
I agree that discourse on Twitter is unbelievably terrible, but one of the ways that I believe using Twitter harms your mind is you just hear the most terrible points and arguments all the time, so you come to discount things that sound facially similar in non-Twitter contexts. I advocate that people completely quit Twitter (and other microblogging platforms like Bluesky) because I think it gets people into the habit of thinking in tweets, and thinking in tweets is ridiculous. When Twitter started, it was delightfully inane. The idea of trying to say anything in such a short space was whimsical. That it's been elevated to a platform for serious discourse is absurd.
Again, the key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N-word and said Black people are stupid. The Centre for Effective Altruism (CEA) released a very short, very simple statement that said all people are equal, i.e., in this context, Black people are equal to everyone else.
The commenter responded harshly to CEA's statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people. And since then, that commenter has been involved in a controversy around racism, i.e., the Manifest 2024 conference. If you're unfamiliar, you can read about that conference on Reflective Altruism here.
In that post, there's a quote from Shakeel Hashim, who was previously the Head of Communications at CEA:
By far the most dismaying part of my work at CEA was the increasing realisation that a big chunk of the rationalist community is just straight up racist. EA != rationalism, but I do think major EA orgs need to do a better job at cutting ties with them. Fixing the rationalism community seems beyond hope for me – prominent leaders are some of the worst offenders here, and it's hard to see them going away. And the entire "truth seeking" approach is often a thinly veiled disguise for indulging in racism.
So, don't take my word for it.
The hazard of speaking too dispassionately or understating things is that it gives people a misleading impression. Underreacting is dangerous, just as overreacting is. This is why the harsh language is necessary.
Yes, knowing the context is vital to understanding where the harsh language is coming from, but I wasn't really writing for people who don't have the context (or who won't go and find out what it is). People who don't know the context can dismiss it, or they can become curious and want to find out more.
But colder, calmer, more understated language can also be easily dismissed, and is not guaranteed to elicit curiosity, either. And the danger there is that people tend to assume if you're not speaking harshly and passionately, then what you're talking about isn't a big deal. (Also, why should people not just say what they really mean?)
Thanks for the answer; it explains things better for me.
I'll just point out that another element that bugged me about the post was the lack of balance. It felt like it was written with an attitude that tries to judge everything in a negative light, which doesn't make it trustworthy in my opinion.
Two examples:
The key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N-word and said Black people are stupid
The email was indeed racist, but Nick Bostrom wrote it 26 years ago, and he has since apologised (the apology itself can be debated, but this is still important missing context).
The commenter responded harshly to CEA's statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people.
The comment literally states the opposite, and I did provide quotes. It really feels like you are trying to interpret things uncharitably.
So far, I feel like the examples provided are mostly debatable. I'd expect more convincing stuff before concluding there is a deep systemic issue to fix.
The quote from CEA's former head of communications is more relevant evidence, I must admit, though I don't know how widespread or accurate their perception is (it doesn't really match what I've seen).
I'd also appreciate some balance by highlighting all the positive elements EA brings to the table, such as literally saving the lives of thousands of Black people in Africa.
I think the overall theme of your complaints is that I don't provide enough context for what I'm talking about, which is fair if you're reading the post without context, but a lot of posts on the EA Forum are "inside baseball" that assume the reader has a lot of context. So, maybe this is an instance of context collapse, where something written with one audience in mind is interpreted differently by another audience with less context or a different context.
I don't think it's wrong for you to have the issues you're having. If I were in your shoes, I would probably have the same issues.
But I don't know how you could avoid these issues and still have "inside baseball" discussion on the EA Forum. This is a reason the "community" tag exists on the forum. It's so people can separate posts that are interesting and accessible to a general audience from posts that only make sense if your head has already been immersed in the community stuff for a while.
The email was indeed racist, but Nick Bostrom wrote it 26 years ago, and he has since apologised (the apology itself can be debated, but this is still important missing context).
I agree this is important context, but this is the sort of "inside baseball" stuff where I generally assume the kind of people interested in reading EA Forum community posts are already well aware of what happened, and now I'm only providing more context because you're directly asking me about it. Reflective Altruism is excellent because the author of that blog, David Thorstad, writes what amount to encyclopedia articles of context for these sorts of things. So, I just refer you to the relevant Reflective Altruism posts about the topics you're interested in. (There is a post on the Bostrom email, for example.)
The comment literally states the opposite, and I did provide quotes.
The comment says that people are approximately equally valuable, not that they are equally valuable, and it's hard to know what exactly this means to the author. But the context is that CEA is saying Black people are equally valuable, and the commenter is saying he disagrees, feels bad about what CEA is saying, and harshly criticizes CEA for saying it. And, subsequently, the commenter organized a conference that was friendly to people with extreme racist views such as white nationalism. The subsequent discussion of that conference did not allay the concerns of people who found this troubling.
What we're talking about here is stuff like people defending slavery, defending colonialism, defending white nationalism, defending segregation, defending the Nazi regime in Germany, and so on. I am not exaggerating. This is literally the kind of thing these people say. And the defenses about why people who say such things should be welcomed into the effective altruist community are not satisfactory.
For me, this is a case where, at multiple steps, I have left a more charitable interpretation open, but, at each turn, the subsequent evidence has pointed to the conclusion that Shakeel Hashim (the former Head of Communications at CEA) came to: that this is just straight-up racism.
I refer you to the following Reflective Altruism posts: Human Biodiversity (Part 2: Manifest), about the Manifest 2024 conference and the ensuing controversy around racism, and Human Biodiversity (Part 7: LessWrong). The post on LessWrong has survey data that supports Shakeel Hashim's comment about racism in the rationalist community.
I'd also appreciate some balance by highlighting all the positive elements EA brings to the table, such as literally saving the lives of thousands of Black people in Africa.
I've had an intense interest in and affinity for effective altruism since before it was called effective altruism. I think it must have been in 2008 when I joined a Facebook group called Giving What We Can created by the philosopher Toby Ord. As I recall, it had just a few hundred members, maybe around 200. The website for Giving What We Can was still under construction and I don't think the organization had been legally incorporated at that point. So, this has been a journey of 17 years for me, which is more than my entire adult life. Effective altruism has been an important part of my life story. Some of my best memories of my time at university were with the friends I made through my university effective altruism group. That's a time in my life I will always treasure and bittersweetly reminisce on, sweetly because it was so beautiful, bitterly because it's over.
If I thought there was nothing good about EA, I wouldn't be on the EA Forum, and I wouldn't be writing things about how to diagnose and fix EA's problems. I would just disavow it and disassociate myself from it, as sadly many people have already done by now. I love the effective altruism I knew in the decade from 2008 to 2018, and it would be sad to me if that's no longer on the Earth. For instance, I do think saving the lives of people living in poverty in sub-Saharan Africa is a worthy cause and a worthy achievement. This is precisely why I don't like EA both abandoning global poverty as a cause area and allowing the encroachment of the old colonialist, racist ideas that people I admire in international development, like the economist William Easterly (author of the book The White Man's Burden and the old blog Aid Watch), warned us so insistently we needed to avoid in contemporary international aid work.
Can you imagine a worse corruption, a worse twisting of this than to allow talk about why Black people are more genetically suited to slavery than white people, or how Europe did Africa a favour by colonizing it, or how Western countries should embrace white nationalism? That's fucking insanity. That is evil. If this is what effective altruism is becoming, then as much as I love what effective altruism once was, effective altruism should die. It has betrayed what it once was and, on the values of the old effective altruism, the right decision would be to oppose the new effective altruism. It really couldn't be more clear.
Thanks, I understand better the context and where you're coming from. This style is easier for me to read and I appreciate that.
I won't have much more time for this conversation, but just two points:
This is precisely why I don't like EA both abandoning global poverty as a cause area
Is this actually true? To me, global poverty is still number one in terms of donations, GiveWell is doing great, and most of the Charity Entrepreneurship charities are in this area.
Can you imagine a worse corruption, a worse twisting of this than to allow talk about why Black people are more genetically suited to slavery than white people, or how Europe did Africa a favour by colonizing it, or how Western countries should embrace white nationalism?
Oh, yes, that would be awful. But I'd expect that virtually everybody on the EA Forum would be against that.
And so far, in the examples you've given, you don't show that even a sizeable minority of people would agree with these claims. For instance, for Manifold, you pointed to the fact that some EAs work with a forecasting organisation from the rationalist community that held a conference that invited many speakers to speak on forecasting, and some of these speakers had previously written racist stuff on a topic unrelated to the conference (and even then, that led to quite a debate).
My understanding might be inaccurate, of course, but that's such a long chain that I would consider this quite far from a prevalent issue that currently has large negative consequences.
Another issue, and why the comment is getting downvoted heavily (including by myself), is that you seem to conflate is and ought in this post, and without that conflation, this post would not exist.
You routinely leap from "a person has moral views that are offensive to you" to "they are wrong about the facts of the matter", and your evidence for this is paper thin at best.
Being able to separate moral views from beliefs on factual claims is one of the things that is expected if you are in EA/LW spaces.
This is not mutually exclusive with the issues CB has found.
Another issue, and why the comment is getting downvoted heavily (including by myself), is that you seem to conflate is and ought in this post, and without that conflation, this post would not exist.
You routinely leap from "a person has moral views that are offensive to you" to "they are wrong about the facts of the matter", and your evidence for this is paper thin at best.
Being able to separate moral views from beliefs on factual claims is one of the things that is expected if you are in EA/LW spaces.
I don't agree with this evaluation and, as stated, it's just an unsupported assertion. So, there is nothing really here for me to respond to except to say I disagree.
It would help to have an example of what you mean by this. I imagine, if you gave an example, I would probably say that I think your characterization is simply wrong, and I find your wording obnoxious. This comes across as trying to insult me personally rather than trying to make a substantive argument that could conceivably be persuasive to me or to any outside person who's on the fence about this topic.
I'm guessing you may have wrongly inferred that I reject certain factual claims on moral grounds, when really I reject them on factual grounds, and part of what I'm criticizing is the ignorance or poor reasoning that, I strain to imagine, must be required to believe such plainly false and obviously ridiculous things. Yet it is also fair to criticize such epistemic mistakes for their moral ramifications. For example, if someone thinks world affairs are orchestrated by a global Jewish conspiracy, that's just an unbelievably stupid thing to think and they can be rightly criticized for believing something so stupid. They can also rightly be criticized for this mistake because it implies immoral conduct, namely, unjustifiable discrimination and hatred against Jewish people. If someone thinks this is a failure to decouple or a failure to appreciate the is/ought distinction, they don't know what they're talking about. In that case, they should study philosophy and not make up nonsense.[1]
But I will caveat that I actually have no idea what you meant, specifically, because you didn't say. And maybe what you intended to say was actually correct and well-reasoned. Maybe if you explained your logic, I would accept it and agree. I don't know.
I don't know what you meant by your comment specifically, but, in general, I have sometimes found arguments about decoupling to be just unbelievably poorly reasoned because they don't account for the most basic considerations. (The problem is not with the concept of decoupling in principle, in the abstract; it's that people try to apply the concept in ways that make no sense.)[2] They are woefully incurious about what the opposing case might be and often contradict plain facts. For example, they might fail to distinguish between a boycott of an organization with morally objectionable views, which is intended to have a causal impact on the world, and the mere acknowledgment of both positive and negative facts about that organization. For example:
Person A: I don't want to buy products from Corporation Inc. because they fund lobbying for evil policies.
Person B: But Corporation Inc. makes good products! Learn to decouple!
(This is based on a real example. Yes, this is ridiculous, and yet something very similar to this was actually said.)
People don't understand the basic concepts being discussed (e.g., the concept of a boycott and the rationale for boycotts) and then they say, "tut, tut, be rational!" But anyone could say "tut, tut, be rational" whenever anyone disagrees with them about anything (even in cases where they happen to be dead wrong and say things that don't make sense), so what on Earth is the point of saying that?
This kind of "tut, tut" comes across to me as epistemically sloppy. The more you scold someone who disagrees with you, the more you lose face if you have to admit you made an embarrassing reasoning mistake, so the less likely you will be to admit such mistakes and the more you'll double down on silly arguments because losing face is so uncomfortable. So, a good way to hold wrong views indefinitely is to say "tut, tut" as much as possible.
But, that's only generally speaking, and I don't know what you meant specifically. Maybe what you meant to say actually made sense. I'll give you the benefit of the doubt, and an opportunity to elaborate, if you want.
This also obviously applies to prudential cases, in addition to moral cases. If you make a stupid mistake like putting the cereal in the fridge and the milk in the cupboard, you can laugh about that because the stakes are low. If you make a stupid mistake that is also dangerous to you, such as mixing cleaning products that contain bleach and ammonia (which produces toxic chloramine gas), then you can criticize this mistake on prudential grounds as well as epistemic grounds. (To criticize a mistake on prudential or moral grounds is only valid if it is indeed a mistake, obviously.) And no one should assert this criticism is based on some kind of basic logical error where you're failing to distinguish prudential considerations from epistemic ones; anyone saying that would not know what they're talking about and should take a philosophy class.
In general, a common sort of reasoning error I observe is that people invoke a correct principle and apply it incorrectly. When they are pressed on the incorrect application, they fall back to defending the principle in the abstract, which is obviously not the point. By analogy, if someone you knew was talking about investing 100% of their savings in GameStop, it would be exasperating if they defended this decision by citing (very strong, quite plausibly completely correct) research about how it's optimal to have an all-equity portfolio. It would be infuriating if they accused you of not understanding the rationale for investing in equities simply because you think a 100% GameStop portfolio is reckless. The simple lesson of this analogy: applying correct principles does not lead to correct conclusions if the principles are applied incorrectly! It's easy to spot when I deliberately make the example obvious to illustrate the point, but often harder to spot in practice, which is why so many people make errors of this kind so often.
An example here is this quote, which strays dangerously close to "these people have a morality that you find offensive, therefore they are wrong on the actual facts of the matter" (otherwise you would make the Nazi-source allegations less central to your criticism here):
(I don't hold the moral views the quote describes, to be clear.)
It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty, citing overtly racist and white supremacist sources to support this argument (including a source with significant connections to the Nazi Party in 1930s and 1940s Germany and to the American Nazi Party, a neo-Nazi party), and that post can receive a significant amount of approval and defense from people in EA, even after perceptive readers strip away the thin disguise over the racism. That is such a bonkers thing and such a morally repugnant thing that I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can't correct it.[2]
It's really quite something that you wrote almost 2,000 words and didn't include a single primary citation to support any of those claims. Even given that most of them are transparently false to anyone who's spent 5 minutes reading either LW or the EA Forum, I think I'd be able to dig up something superficially plausible with which to smear them.
And if anyone is curious about why Yarrow might have an axe to grind, they're welcome to examine this post, along with the associated comment thread.
Edit: changed the link to an archive.org copy, since the post was moved to draft after I posted this.
Edit2: I was incorrect about when it was moved back to a draft, see this comment.
The sources are cited in quite literally the first sentence of the quick take.
To my knowledge, every specific factual claim I made is true and none are false. If you want to challenge one specific factual claim, I would be willing to provide sources for that one claim. But I don't want to be here all day.
Since I guess you have access to LessWrong's logs given your bio, are you able to check when and by whom that LessWrong post was moved to drafts, i.e., if it was indeed moved to drafts after your comment and not before, and if it was, whether it was moved to drafts by the user who posted it rather than by a site admin or moderator?
And, indeed, this seems to show your accusation that there was an attempt to hide the post after you brought it up was false. An apology wouldn't hurt!
The other false accusation was that I didn't cite any sources, when in fact I did in the very first sentence of my quick take. Apart from that, I also directly linked to an EA Forum post in my quick take. So, however you slice it, that accusation is wrong. Here, too, an apology wouldn't hurt if you want to signal good faith.
My offer is still open to provide sources for any one factual claim in the quick take if you want to challenge one of them. (But, as I said, I don't want to be here all day, so please keep it to one.)
Incidentally, in my opinion, that post supports my argument about anti-LGBT attitudes on LessWrong. I don't think I could have much success persuading LessWrong users of that, however, and that was not the intention of this quick take.
Yes, indeed, there was only an attempt to hide the post three weeks ago. I regret the sloppiness in the details of my accusation.
The other false accusation was that I didn't cite any sources
I did not say that you did not cite any sources. Perhaps the thing I said was confusingly worded? You did not include any links to any of the incidents that you describe.
Huh? Why not just admit your mistake? Why double down on an error?
By the way, who do you think saved that post in the Wayback Machine on the exact same date it was moved to drafts? A remarkable coincidence, wouldn't you say?
Your initial comment insinuated that the incidents I described were made up. But the incidents were not made up. They really happened. And I linked both to extensive documentation on Reflective Altruism and directly to a post on the EA Forum so that anyone could verify that the incidents I described occurred.
There was one incident I described that I chose not to include a link to out of consideration for your coworker. I wanted to avoid presenting the quick take as a personal attack on them. (That was not the point of what I wrote.) I still think that is the right call. But I can privately provide the link to anyone who requests it if there is any doubt this incident actually occurred.
But, in any case, I very much doubt we are going to have a constructive conversation at this point. Even though I strongly disagree with your views and I still think you owe me an apology, I sincerely wish you happiness.
I think it may be illuminating to conceptualise EA as having several "attractor failure modes" that it can coalesce into if insufficient attention is paid to keeping EA community spaces from sliding into them. You've noted some of these attractor failures in your post, and they are often related to other things that overlap with EA. They include (but are not limited to):
The cultic self-help conspiratorial milieu (probably from rationalism)
Racism and eugenicist ideas
Doom spirals (many versions depending on cause area, but "AI will kill us all, P(doom) = 95%" is definitely one of them)
The question, then, is how one balances community moderation to promote the environment of individual truth-seeking necessary to support EA as a philosophical concept, while also striving to avoid these failure modes, given a documented history within EA of them leading to things that don't work out so well. I wonder what CEA's community health team have said on the matter.
I'm very glad of Reflective Altruism's work and I'm sorry to see the downvotes on this post. Would you consider a repost as a main post with dialed-down emotive language in order to better reach people? I'd be happy to give you feedback on a draft.
Thanks. I'll think about the idea of doing a post, but, honestly, what I wrote was what I wanted to write. I don't see the emotion or the intensity of the writing as a failure or an indulgence, but as me saying what I really mean, and saying what needs to be said. What good's sugar-coating it?
Something that anyone can do (David Thorstad has given permission in comments I've seen) is simply repost the Reflective Altruism posts about LessWrong and about the EA Forum here, on the EA Forum. Those posts are extremely dry, extremely factual, and not particularly opinionated. They're more investigative than argumentative.
I have thought about what, practically, to do about these problems in EA, but I don't think I have particularly clear thoughts or good thoughts on that. An option that would feel deeply regrettable and unfortunate to me would be for the subset of the EA movement that shares my discomfort to try to distinguish itself under some label such as effective giving. (Someone could probably come up with a better label if they thought about it for a while.)
I hope that there is a way for people like me to save what they love about this movement. I would be curious to hear ideas about this from people who feel similarly.
The context for what Iâm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here.
Warning: This is a polemic that uses harsh language. I still completely, sincerely mean everything I say here and I consciously endorse it.[1]
It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty and can cite overtly racist and white supremacist sources to support this argument (even a source with significant connections to the 1930s and 1940s Nazi Party in Germany and the American Nazi Party, a neo-Nazi party) and that post can receive a significant amount of approval and defense from people in EA, even after the thin disguise over top of the racism is removed by perceptive readers. That is such a bonkers thing and such a morally repugnant thing, I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it canât correct it.[2]
My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/âBay Area rationalist community is that they donât like social justice, anti-racism, or socially/âculturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the ârationalistâ community?? What are you talking about?! As I recall based on votes, a majority of forum users who voted on the comment agreed.[3]
Overall, LessWrong users seem broadly sympathetic to racist arguments and views.[4] Same for sexist or anti-feminist views, and extremely so for anti-LGBT (especially anti-trans) views. Personally, I find it to be the most unpleasant website Iâve spent more than ten hours reading. When I think of LessWrong, I picture a dark, dingy corner of a house. I truly find it to be awful.
The more Iâve thought about it, the more truth I find in the blogger Ozy Brennanâs interpretation of the LessWrong and the rationalist community through the concept of the âcultic milieuâ and a comparison to new religious movements (not cults in the more usual sense connoting high-control groups). Ozy Brennan self-identifies as a rationalist and is active in the community, which makes this analysis far more believable than if it came from an outsider. The way Iâd interpret Ozyâs blog post, which Ozy may not agree with, is that rationalists are in some sense fundamentally devoted to being incorrect, since theyâre fundamentally devoted to being against consensus or majority views on many major topics â regardless of whether those views are correct or incorrect â and inevitably that will lead to having a lot of incorrect views.
I see very loose, very limited analogies between LessWrong and online communities devoted discussing conspiracy theories like QAnon or to online incel communities. Conspiracy theories because LessWrong has a suspicious, distrustful, at least somewhat paranoid or hypervigilant view on people and the world, this impulse to turn over rocks to find where the bad stuff is. Also, thereâs the impulse to connect too much. To subsume too much under one theory or worldview. And too much reliance on oneâs own fringe community to explain the world and interpret everything. Both, in a sense, are communities built around esoteric knowledge. And, indeed, Iâve seen some typical sort of conspiracy theory-seeming stuff on LessWrong related to American intelligence agencies and so on.
Incel communities because the atmosphere of LessWrong feels rather bitter, resentful, angry, unhappy, isolated, desperate, arrogant, and hateful, and in its own way is also a sort of self-help or commiseration community for young men who feel left out of the normal social world. But rather than encouraging healthy, adaptive responses to that experience, instead both communities encourage anti-social behaviour, leaning into distorted thinking, resentment, and disdainful views of other people.
I just noticed that Ozy recently published a much longer article in Asterisk Magazine on the topic of actual high-control groups or high-demand groups with some connection to the rationalist community. It will take me a while to properly read the whole thing and to think about it. But at a glance, there are some aspects of the article that are relevant to what Iâm discussing here, such as this quote:
And this quote:
And this one:
In principle, you could have the view that the typical or median person is benefitted by the Sequences or by LessWrong or the rationalist community, and itâs just an unfortunate but uncommon side-effect for people to slip into cults or high-control groups. It sounds like thatâs what Ozy believes. My view is much harsher: by and large, the influence that LessWrong/âthe rationalist community has on people is bad, and people who take these ideas and this subculture to an extreme are just experiencing a more extreme version of the bad that happens to pretty much everyone who is influenced by these ideas and this subculture. (There might be truly minor exceptions to this, but I still see this as the overall trend.)
Obviously, there is now a lot of overlap between the EA Forum and LessWrong and between EA and the rationalist community. I think to the extent that LessWrong and the rationalist community have influenced EA, EA has become something much worse. Itâs become something repelling to me. I donât want any of this cult stuff. I donât want any of this racist stuff. Or conspiracy theory stuff. Or harmful self-help stuff for isolated young men. Iâm happy to agree with the consensus view most of the time because I care about being correct much more than I care about being counter-consensus. I am extremely skeptical toward esoteric knowledge and I think itâs virtually always either nonsense or prosaic stuff repackaged to look esoteric. I donât buy these promises of unlocking powerful secrets through obscure websites.
There was always a little bit of overlap between EA and the rationalist community, starting very early on, but it wasnât a ruinous amount. And itâs not like EA didnât independently have its own problems before the rationalist communityâs influence increased a lot, but those problems seemed more manageable. The situation now feels like the rationalist community is unloading more and more of its cargo onto the boat that is EA, and EA is just sinking deeper and deeper into the water over time. I feel sour and queasy about this because EA was once something I loved and itâs becoming increasingly laden with things I oppose in the strongest possible terms, like racism, interpersonal cruelty, and extremely irrational thinking patterns. How can people who were in EA because of global poverty and animal welfare, who had no previous affiliation with the rationalist community, stand this? Are they all gone already? Have they opted to recede from public arguments and just focus on their own particular niches? What gives?
And to the extent that the racism in EA is independently EAâs problem and has nothing to do with the influence of the rationalist community (which obviously has to be more than nothing), then that is 100% EAâs problem, but I canât imagine racism in EA could be satisfactorily addressed without significant conflict with and alienation of many people who overlap between the rationalist community and EA and who either endorse or strongly sympathize with racist views. (For example, in January 2023, when the Centre for Effective Altruism published a brief statement that affirmed the equality of Black people in response to the publication of a racist email by the philosopher Nick Bostrom, the most upvoted comment was from a prominent rationalist that started, âI feel really quite bad about this post,â and argued at length that universal human equality is not a tenet of effective altruism. This is unbelievably foolish and unbelievably morally wrong. Independently, that person has said and done things that seem to indicate either support or sympathy for racist views, which makes me think it was probably not just a big misunderstanding.)[5] Thatâs why the diversion from talking about racism on the EA Forum into discussing the rationalist communityâs influence on EA.
Racism is paradigmatically evil and there is no moral or rational justification for it. Donât lose sight of something so fundamental and clear. Donât let EA drown under the racism and all the other bad stuff people want to bring to it. (Hey, now EA is the drowning child! Talk about irony!)
Incidentally, LessWrong and the rationalist community are dead wrong about near-term AGI as well â specifically, the probability of AGI before January 1, 2033 is significantly less than 0.1% and the MIRI worldview on alignment is most likely either just generally wrong/âmisguided or at least not applicable to deep learning-based systems â and that poses its own big problem for EA to the extent that EA has been influenced to accept LessWrong/ârationalistsâ views about near-term AGI. So, the influence of the rationalist community on EA has been damaging in multiple respects. (Although, again, EA bears responsibility for its part in all of it, both allowing the influence and for whatever portion of the mistake it would have made without that influence.)
About a day after posting this quick take, I changed the first sentence of this quick take from just italicized to a heading to make the links to the Reflective Altruism post more prominent and harder to miss. The sentence was always there.
Edited on October 25, 2025 at 11:22 PM Eastern to add: If you donât know about the incident Iâm referring to here, the context for what Iâm discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here. The links to these Reflective Altruism posts were always in the first sentence of this quick take, but Iâm adding this footnote to make those links harder to miss.
Edited on October 25, 2025 at 10:52 PM Eastern to add: I purposely omitted a link to this comment because I didnât want to make this quick take a confrontation against the person who wrote it. But if you donât believe me and you want to see the comment for yourself, send me a private message and Iâll send you the link.
Edited on October 25, 2025 at 11:25 PM Eastern to add: This is extensively documented in a different Reflective Altruism post from the two I have already linked, which you can find here.
Edited on October 25, 2025 at 11:19 PM Eastern to add: Iâm referring here to the Manifest 2024 conference, which was held at Lighthaven in Berkely, California, a venue owned by Lightcone Infrastructure, the same organization that owns and operates LessWrong. Iâm also referring to the discussions that happened after the event. There have been many posts about this event on the EA Forum. One post I found interesting was from a pseudonymous self-described member of the rationalist community that was critical of some aspects of the event and of some aspects of the rationalist community. You can read that post here.
Edited on October 26, 2025 at 11:57 PM Eastern to add: See also the philosopher David Thorstad's post about Manifest 2024 on his blog Reflective Altruism. David's posts are nearly encyclopedic in their thoroughness and have an incredibly high information density.
A possible explanation for why this post is heavily downvoted:
It makes serious, inflammatory, accusatory, broad claims in a way that does not promote civil discussion
It rarely cites specific examples and facts that would serve to justify these claims
You linked to an article by Reflective Altruism, but I think it would have been beneficial to put links to specific examples directly in your text.
Two of the specific examples you use do not seem to be presented accurately:
You're talking about that post. However, you're failing to mention that it currently has negative karma and twice as much disagreement as agreement. If anything, it is representative of something that EA (as a whole) does not support.
Your claim implies that the commenter said that for racist reasons. However, they say in that very comment: "I value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality." And much of their disagreement centred on the form of the statement.
Why did you not specify that in your post?
Thank you for your comment.
I am a strong believer in civility and kindness, and although my quick take used harsh language, I think that is appropriate. I think, in a way, it can even be more respectful to speak plainly, directly, and honestly, as opposed to being passive-aggressive and dressing up insults in formal language.
I am expecting people to know the context or else learn what it is. It would not be economical for me to simply recreate the work already done on Reflective Altruism in my quick take.
That post only has negative karma because I strong-downvoted it. If I remove my strong downvote, it has 1 karma. 8 agrees and 14 disagrees is more disagrees than agrees, but that is not a good ratio for a post like this. Also, this is about more than just the scores on the post; it's also about the comments defending the post, both on the post itself and elsewhere, and the scores on those comments.
I don't think racist ideas should have 1 karma when you exclude my vote.
I think that if, in response to a racist email about Black people from someone in our community, an organization says that Black people have equal value to everyone else, your response should not be to argue that Black people have "approximately" as much value as white people. Normally, I would extend much more benefit of the doubt and try to interpret the comment more charitably, but subsequent evidence (namely, the commenter's association with and defense of people with extreme racist views) has made me interpret that comment much less charitably than I otherwise would. In any case, even on the most charitable interpretation, it is foolish and morally wrong.
I think the issue is that, from my standpoint, there is a combination of harsh language, many broad claims about EA and LessWrong that are both very negative and vague, and a lack of specific evidence in the text.
I expect few people here to be swayed by this kind of communication, since you may simply be overreacting and have an extremely low threshold for using terms like "racism". It's the kind of discourse I tend to see on Twitter.
As an example of what I'd call an overreaction, when you say that someone did something "unbelievably foolish and unbelievably morally wrong," I am thinking of very bad stuff, like committing fraud with charity money.
I am not thinking about a comment where someone said "I value people approximately equally in impact estimates" (instead of "absolutely equally"). The lack of evidence means I can't ground my reading in the commenter's specific intentions.
There is a lot of context to fill people in on, and I'll leave that to the Reflective Altruism posts. I also added some footnotes that provide a bit more context. I wasn't even really thinking about explaining everything to people who don't already know the background.
I may be overreacting or you may be underreacting. Who's to say? The only way to find out is to read the Reflective Altruism posts I cited and get the background knowledge that my quick take presumes.
I agree that discourse on Twitter is unbelievably terrible, but one of the ways I believe using Twitter harms your mind is that you hear the most terrible points and arguments all the time, so you come to discount things that sound facially similar in non-Twitter contexts. I advocate that people completely quit Twitter (and other microblogging platforms like Bluesky) because I think it gets people into the habit of thinking in tweets, and thinking in tweets is ridiculous. When Twitter started, it was delightfully inane. The idea of trying to say anything in such a short space was whimsical. That it's been elevated to a platform for serious discourse is absurd.
Again, the key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N-word and said Black people are stupid. The Centre for Effective Altruism (CEA) released a very short, very simple statement that said all people are equal, i.e., in this context, that Black people are equal to everyone else.
The commenter responded harshly against CEA's statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people. And since then, that commenter has been involved in a controversy around racism, namely the Manifest 2024 conference. If you're unfamiliar, you can read about that conference on Reflective Altruism here.
In that post, there's a quote from Shakeel Hashim, who was previously the Head of Communications at the Centre for Effective Altruism (CEA):
So, don't take my word for it.
The hazard of speaking too dispassionately or understating things is that it gives people a misleading impression. Underreacting is dangerous, just as overreacting is. This is why the harsh language is necessary.
Yes, knowing the context is vital to understanding where the harsh language is coming from, but I wasn't really writing for people who don't have the context (or who won't go and find out what it is). People who don't know the context can dismiss it, or they can become curious and want to find out more.
But colder, calmer, more understated language can also be easily dismissed, and is not guaranteed to elicit curiosity, either. And the danger there is that people tend to assume that if you're not speaking harshly and passionately, then what you're talking about isn't a big deal. (Also, why should people not just say what they really mean?)
Thanks for the answer; it explains things better for me.
I'll just point out that another element that bugged me about the post was the lack of balance. It felt as though everything was judged in the most negative light possible, which doesn't make it trustworthy in my opinion.
Two examples:
The email was indeed racist, but Nick Bostrom wrote it 26 years ago and has since apologised (the apology itself can be debated, but this is still important missing context).
The comment literally states the opposite, and I did provide quotes. It really feels like you are trying to interpret things uncharitably.
So far, I feel like the examples provided are mostly debatable. I'd expect more convincing evidence before concluding there is a deep systemic issue to fix.
The quote from CEA's former head of communications is more relevant evidence, I must admit, though I don't know how widespread or accurate their perception is (it doesn't really match what I've seen).
I'd also appreciate some balance: highlighting the positive elements EA brings to the table, such as literally saving the lives of thousands of Black people in Africa.
I think the overall theme of your complaints is that I don't provide enough context for what I'm talking about, which is fair if you're reading the post without that context. But a lot of posts on the EA Forum are "inside baseball" that assume the reader has a lot of context. So, maybe this is an instance of context collapse, where something written with one audience in mind is interpreted differently by another audience with less context or a different context.
I don't think it's wrong for you to have the issues you're having. If I were in your shoes, I would probably have the same issues.
But I don't know how you could avoid these issues and still have "inside baseball" discussion on the EA Forum. This is a reason the "community" tag exists on the forum. It's so people can separate posts that are interesting and accessible to a general audience from posts that only make sense if your head has already been immersed in the community stuff for a while.
I agree this is important context, but this is the sort of "inside baseball" stuff where I generally assume the kind of people interested in reading EA Forum community posts are already well aware of what happened, and I'm only providing more context now because you're directly asking me about it. Reflective Altruism is excellent because the author of that blog, David Thorstad, writes what amount to encyclopedia articles of context for these sorts of things. So, I'll just refer you to the relevant Reflective Altruism posts about the topics you're interested in. (There is a post on the Bostrom email, for example.)
The comment says that people are approximately equally valuable, not that they are equally valuable, and it's hard to know what exactly that means to its author. But the context is that CEA said Black people are equally valuable, and the commenter said he disagreed, felt bad about what CEA was saying, and harshly criticized CEA for saying it. And, subsequently, the author organized a conference that was friendly to people with extreme racist views such as white nationalism. The subsequent discussion of that conference did not allay the concerns of people who find that alarming.
What we're talking about here is stuff like people defending slavery, defending colonialism, defending white nationalism, defending segregation, defending the Nazi regime in Germany, and so on. I am not exaggerating. This is literally the kind of thing these people say. And the defenses of why people who say such things should be welcomed into the effective altruist community are not satisfactory.
For me, this is a case where, at multiple steps, I have left a more charitable interpretation open, but, at every turn, the subsequent evidence has pointed to the conclusion that Shakeel Hashim (the former Head of Communications at CEA) came to: that this is just straight-up racism.
I refer you to the following Reflective Altruism posts: Human Biodiversity (Part 2: Manifest), about the Manifest 2024 conference and the ensuing controversy around racism, and Human Biodiversity (Part 7: LessWrong). The post on LessWrong has survey data that supports Shakeel Hashim's comment about racism in the rationalist community.
I've had an intense interest in and affinity for effective altruism since before it was called effective altruism. I think it must have been in 2008 when I joined a Facebook group called Giving What We Can, created by the philosopher Toby Ord. As I recall, it had just a few hundred members, maybe around 200. The website for Giving What We Can was still under construction, and I don't think the organization had been legally incorporated at that point. So, this has been a journey of 17 years for me, which is more than my entire adult life. Effective altruism has been an important part of my life story. Some of my best memories from university are of time spent with the friends I made through my university effective altruism group. That's a time in my life I will always treasure and bittersweetly reminisce on: sweetly because it was so beautiful, bitterly because it's over.
If I thought there was nothing good about EA, I wouldn't be on the EA Forum, and I wouldn't be writing about how to diagnose and fix EA's problems. I would just disavow it and disassociate myself from it, as sadly many people have already done by now. I love the effective altruism I knew in the decade from 2008 to 2018, and it would sadden me if that no longer existed in the world. For instance, I do think saving the lives of people living in poverty in sub-Saharan Africa is a worthy cause and a worthy achievement. This is precisely why I don't like EA both abandoning global poverty as a cause area and allowing the encroachment of the old colonialist, racist ideas that the people I admire in international development, like the economist William Easterly (author of the book The White Man's Burden and the old blog Aid Watch), warned us so insistently we needed to avoid in contemporary international aid work.
Can you imagine a worse corruption, a worse twisting of this, than to allow talk about why Black people are more genetically suited to slavery than white people, or how Europe did Africa a favour by colonizing it, or how Western countries should embrace white nationalism? That's fucking insanity. That is evil. If this is what effective altruism is becoming, then as much as I love what effective altruism once was, effective altruism should die. It has betrayed what it once was, and, on the values of the old effective altruism, the right decision would be to oppose the new effective altruism. It really couldn't be more clear.
Thanks, I understand the context and where you're coming from better now. The style is easier for me to read and I appreciate that.
I won't have much more time for this conversation, but just two points:
Is this actually true? As far as I can tell, global poverty is still number one in terms of donations, GiveWell is doing great, and most of the Charity Entrepreneurship charities are in this area.
Oh, yes, that would be awful. But I'd expect that virtually everybody on the EA Forum would be against that.
And so far, in the examples you've given, you don't show that even a sizeable minority of people would agree with these claims. For instance, for Manifest, you pointed to the fact that some EAs work with a forecasting organisation from the rationalist community that ran a conference that invited many speakers to speak on forecasting, and some of those speakers had previously written racist stuff on topics unrelated to the conference (and even then, that led to quite a debate).
My understanding might be inaccurate, of course, but that's such a long chain that I would consider this quite far from being a prevalent issue with large negative consequences.
Another issue, and the reason the comment is getting heavily downvoted (including by me), is that you seem to conflate the two sides of the is-ought distinction in this post; without that conflation, this post would not exist.
You routinely leap from "a person has moral views that are offensive to you" to "they are wrong about the facts of the matter," and your evidence for this is paper-thin at best.
Being able to separate moral views from beliefs about factual claims is one of the things that is expected if you are in EA/LW spaces.
This is not mutually exclusive with the issues CB has found.
I don't agree with this evaluation and, as stated, it's just an unsupported assertion. So, there is nothing really here for me to respond to except to say I disagree.
It would help to have an example of what you mean by this. I imagine that, if you gave an example, I would probably say that I think your characterization is simply wrong, and I find your wording obnoxious. It comes across as trying to insult me personally rather than trying to make a substantive argument that could conceivably be persuasive to me or to any outside person who's on the fence about this topic.
I'm guessing you may have wrongly inferred that I reject certain factual claims on moral grounds, when really I reject them on factual grounds, and part of what I'm criticizing is the ignorance or poor reasoning that I strain to imagine must be required to believe such plainly false and obviously ridiculous things. Yet it is also fair to criticize such epistemic mistakes for their moral ramifications. For example, if someone thinks world affairs are orchestrated by a global Jewish conspiracy, that's just an unbelievably stupid thing to think, and they can be rightly criticized for believing something so stupid. They can also rightly be criticized for this mistake because it implies immoral conduct, namely, unjustifiable discrimination and hatred against Jewish people. If someone thinks this is a failure to decouple or a failure to appreciate the is/ought distinction, they don't know what they're talking about. In that case, they should study philosophy and not make up nonsense.[1]
But I will caveat that I actually have no idea what you meant, specifically, because you didn't say. And maybe what you intended to say was actually correct and well-reasoned. Maybe if you explained your logic, I would accept it and agree. I don't know.
I don't know what you meant by your comment specifically, but, in general, I have sometimes found arguments about decoupling to be unbelievably poorly reasoned because they don't account for the most basic considerations. (The problem is not with the concept of decoupling in principle, in the abstract; it's that people try to apply the concept in ways that make no sense.)[2] They are woefully incurious about what the opposing case might be and often contradict plain facts. For example, they might fail to distinguish between a boycott of an organization with morally objectionable views, which is intended to have a causal impact on the world, and the mere acknowledgment of both positive and negative facts about that organization. For example:
Person A: I don't want to buy products from Corporation Inc. because they fund lobbying for evil policies.
Person B: But Corporation Inc. makes good products! Learn to decouple!
(This is based on a real example. Yes, this is ridiculous, and yet something very similar to this was actually said.)
People don't understand the basic concepts being discussed (e.g., the concept of a boycott and the rationale for boycotts) and then they say, "tut, tut, be rational!" But anyone could say "tut, tut, be rational" whenever anyone disagrees with them about anything (even in the cases where they happen to be dead wrong and say things that don't make sense), so what on Earth is the point of saying that?
This kind of "tut, tut" comes across to me as epistemically sloppy. The more you scold someone who disagrees with you, the more face you lose if you have to admit you made an embarrassing reasoning mistake, so the less likely you will be to admit such mistakes and the more you'll double down on silly arguments, because losing face is so uncomfortable. So, a good way to hold wrong views indefinitely is to say "tut, tut" as much as possible.
But that's only generally speaking, and I don't know what you meant specifically. Maybe what you meant to say actually made sense. I'll give you the benefit of the doubt, and an opportunity to elaborate, if you want.
This also obviously applies to prudential cases, in addition to moral cases. If you make a stupid mistake like putting the cereal in the fridge and the milk in the cupboard, you can laugh about it because the stakes are low. If you make a stupid mistake that is also dangerous to you, such as mixing cleaning products that contain bleach and ammonia (which produces toxic chloramine gases), then you can criticize this mistake on prudential grounds as well as epistemic grounds. (To criticize a mistake on prudential or moral grounds is only valid if it is indeed a mistake, obviously.) And no one should assert that this criticism is based on some kind of basic logical error where you're failing to distinguish prudential considerations from epistemic ones; anyone saying that would not know what they're talking about and should take a philosophy class.
In general, a common sort of reasoning error I observe is that people invoke a correct principle and apply it incorrectly. When they are pressed on the incorrect application, they fall back to defending the principle in the abstract, which is obviously not the point. By analogy, if someone you knew was talking about investing 100% of their savings in GameStop, it would be exasperating if they defended this decision by citing (very strong, quite plausibly completely correct) research about how it's optimal to have an all-equity portfolio. It would be infuriating if they accused you of not understanding the rationale for investing in equities simply because you think a 100% GameStop portfolio is reckless. The simple lesson of this analogy: applying correct principles does not lead to correct conclusions if the principles are applied incorrectly! The error is easy to spot when I deliberately construct an obvious example to illustrate the point, but it is often less obvious in practice, which is why so many people make errors of this kind so often.
An example here is this quote, which comes dangerously close to "these people have a morality that you find offensive, therefore they are wrong on the actual facts of the matter" (otherwise you would make the Nazi source allegations less central to your criticism here):
(To be clear, I don't hold the moral views expressed in the quote.)
It's really quite something that you wrote almost 2,000 words and didn't include a single primary citation to support any of those claims. Even given that most of them are transparently false to anyone who's spent 5 minutes reading either LW or the EA Forum, I think I'd be able to dig up something superficially plausible with which to smear them.
And if anyone is curious about why Yarrow might have an axe to grind, they're welcome to examine this post, along with the associated comment thread.
Edit: Changed the link to an archive.org copy, since the post was moved to draft after I posted this. Edit 2: I was incorrect about when it was moved back to a draft; see this comment.
I believe Yarrow is referencing this series of articles from David Thorstad, which quotes primary sources extensively.
The sources are cited in quite literally the first sentence of the quick take.
To my knowledge, every specific factual claim I made is true and none are false. If you want to challenge one specific factual claim, I would be willing to provide sources for that one claim. But I don't want to be here all day.
Since I guess you have access to LessWrong's logs given your bio, are you able to check when and by whom that LessWrong post was moved to drafts, i.e., whether it was indeed moved to drafts after your comment and not before, and, if it was, whether it was moved to drafts by the user who posted it rather than by a site admin or moderator?
My bad, it was moved back to draft on October 3rd (~3 weeks ago) by you. I copied the link from another post that linked to it.
Hey, so you're 0 for 2 on your accusations! Want to try again?
And, indeed, this seems to show that your accusation that there was an attempt to hide the post after you brought it up was false. An apology wouldn't hurt!
The other false accusation was that I didn't cite any sources, when in fact I did in the very first sentence of my quick take. Apart from that, I also directly linked to an EA Forum post in my quick take. So, however you slice it, that accusation is wrong. Here, too, an apology wouldn't hurt if you want to signal good faith.
My offer is still open to provide sources for any one factual claim in the quick take if you want to challenge one of them. (But, as I said, I don't want to be here all day, so please keep it to one.)
Incidentally, in my opinion, that post supports my argument about anti-LGBT attitudes on LessWrong. I don't think I could have much success persuading LessWrong users of that, however, and that was not the intention of this quick take.
Yes, indeed, there was only an attempt to hide the post three weeks ago. I regret the sloppiness in the details of my accusation.
I did not say that you did not cite any sources. Perhaps the thing I said was confusingly worded? You did not include any links to any of the incidents that you describe.
Huh? Why not just admit your mistake? Why double down on an error?
By the way, who do you think saved that post in the Wayback Machine on the exact same date it was moved to drafts? A remarkable coincidence, wouldn't you say?
Your initial comment insinuated that the incidents I described were made up. But the incidents were not made up. They really happened. And I linked both to extensive documentation on Reflective Altruism and directly to a post on the EA Forum so that anyone could verify that the incidents I described occurred.
There was one incident I described that I chose not to include a link to out of consideration for your coworker. I wanted to avoid presenting the quick take as a personal attack on them. (That was not the point of what I wrote.) I still think that is the right call. But I can privately provide the link to anyone who requests it if there is any doubt this incident actually occurred.
But, in any case, I very much doubt we are going to have a constructive conversation at this point. Even though I strongly disagree with your views and I still think you owe me an apology, I sincerely wish you happiness.
Thanks for sharing your misgivings.
I think it may be illuminating to conceptualise EA as having several "attractor failure modes" it can coalesce into if insufficient attention is paid to keeping EA community spaces from doing so. You've noted some of these attractor failures in your post, and they are often related to other things that overlap with EA. They include (but are not limited to):
The cultic self-help conspiratorial milieu (probably from rationalism)
Racism and eugenicist ideas
Doomspirals (many versions depending on cause area, but "AI will kill us all, P(doom) = 95%" is definitely one of them)
The question, then, is how one balances community moderation so as to promote the environment of individual truth-seeking necessary to support EA as a philosophical concept while also striving to avoid these failure modes, given a documented history within EA of them leading to things that don't work out so well. I wonder what CEA's community health team has said on the matter.
I'm very glad of Reflective Altruism's work and I'm sorry to see the downvotes on this post. Would you consider reposting it as a main post with dialed-down emotive language in order to better reach people? I'd be happy to give you feedback on a draft.
Thanks. I'll think about the idea of doing a post, but, honestly, what I wrote was what I wanted to write. I don't see the emotion or the intensity of the writing as a failure or an indulgence, but as me saying what I really mean, and saying what needs to be said. What good is sugar-coating it?
Something that anyone can do (David Thorstad has given permission in comments I've seen) is simply repost the Reflective Altruism posts about LessWrong and about the EA Forum here, on the EA Forum. Those posts are extremely dry, extremely factual, and not particularly opinionated. They're more investigative than argumentative.
I have thought about what, practically, to do about these problems in EA, but I don't think I have particularly clear or good thoughts on that. An option that would feel deeply regrettable and unfortunate to me would be for the subset of the EA movement that shares my discomfort to try to distinguish itself under some label such as "effective giving." (Someone could probably come up with a better label if they thought about it for a while.)
I hope that there is a way for people like me to save what they love about this movement. I would be curious to hear ideas about this from people who feel similarly.