If we’re going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I’ve said in the past. They were also early to crypto, early to AI, early to Covid. It’s sometimes hard to decide which things are from EA or Rationality, but there are a number of possible wins. If you don’t mention those, I think you’re probably fudging the numbers.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in […]
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
On the charges of racism, I think you’ll have to present some evidence for that.
I’ve seen you complain elsewhere that the ban times for negative karma comments are too long. I think they may be, but I guess they exist to stop behaviour exactly like this. Personally, I think it’s pretty antisocial to respond to a short message with an extremely long one that is kind of aggressive.
I think on the racism front, Yarrow is referring to the perception that the reason Moskovitz won’t fund rationalist stuff is that either he thinks a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn’t want the hassle. I think that belief genuinely is quite common among rationalists, no? Although, there are clearly rationalists who don’t believe it, and most rationalists are not right-wing extremists as far as I can tell.
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism: Moskovitz announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer or allude to the conference as the straw that broke the camel’s back.
Sure, and do you want to stand on any of those accusations? I am not going to argue the point with two blog posts. What is the point you think is the strongest?
As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don’t fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it’s probably too early to judge.
Well, the evidence is there if you’re ever curious. You asked for it, and I gave it.
David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as a few other people well-known in the effective altruism community and the LessWrong community. He has a number of published academic papers on rationality, epistemology, cognition, existential risk, and AI. He’s about as deeply familiar with the effective altruist community as it’s possible for someone to be, and also has a deep familiarity with the LessWrong community.
In my opinion, David Thorstad has a deeper understanding of the EA community’s ideas and community dynamics than many in the community do, and, given the overlap between the EA community and the LessWrong community, his understanding also extends to a significant degree to the LessWrong community. I think people in the EA community are used to drive-by criticisms from people who have paid minimal attention to EA and its ideas, but David has spent years interfacing with the community and doing both academic research and blogging related to EA. So, what he writes is not drive-by criticism and, indeed, a number of people in EA apparently listen to him, read his blog posts and academic papers, and take him seriously. All this to say, his work isn’t something that can be dismissed out of hand. His work is the kind of scrutiny or critical appraisal that people in EA have been saying they want for years. Here it is, so folks had better at least give it a chance.
To me, “ugly or difficult topics should be discussed” is an inaccurate euphemism. I don’t think the LessWrong community is particularly capable of or competent at discussing ugly or difficult topics. I think they shy away from the ugly and difficult parts, and generally don’t have the stomach or emotional stamina to sit through the discomfort. What is instead happening in the LessWrong community is that people credulously accept ugly, wrong, evil ideas, in some part due to an inability to handle the discomfort of scrutinizing them, and in large part because it is an ideological trainwreck of a community that believes ridiculous stuff all the time (like the many examples I gave above) and typically has atrocious epistemic practices (e.g., guessing or believing things based on a hunch without Googling them).
There are multiple adjacent cults, which I’ve said in the past.
What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude.
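To make that concrete, here is a minimal sketch of how one could check the base-rate claim. Every number in it (the assumed base rate, community size, observation window, and observed cult count) is a placeholder assumption for illustration, not a sourced figure, so the output only shows the shape of the comparison, not a real conclusion:

```python
import math

# All numbers below are hypothetical placeholders, not sourced figures.
base_rate_per_person_year = 1e-5   # assumed cult-formation events per person-year in a general population
community_size = 10_000            # assumed size of the rationalist community/milieu
years = 15                         # assumed observation window
observed = 3                       # assumed count of "adjacent cults"

# Expected number of cults if the community matched the general base rate
expected = base_rate_per_person_year * community_size * years

# Poisson tail: probability of seeing at least `observed` cults if only the base rate were at work
p_tail = 1 - sum(math.exp(-expected) * expected**k / math.factorial(k) for k in range(observed))

print(f"Expected cults under the base rate: {expected:.2f}")
print(f"P(at least {observed} cults | base rate): {p_tail:.3f}")
```

Whether the community actually sits “orders of magnitude” above the base rate depends entirely on what real values replace those placeholders.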
They were also early to crypto, early to AI, early to Covid.
I don’t think any of these are particularly good or strong examples. A very large number of people were as early or earlier to all of these things as the LessWrong community.
For instance, many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, stores sold out of face masks in many cities in North America. (One example of many.) The oldest post on LessWrong tagged with “covid-19” is from well after this started happening. (I also searched the forum for posts containing “covid” or “coronavirus” and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described “prepper” who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes a similarly ambivalent, cautious tone as many mainstream news articles published before that post.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don’t start to get really worried about covid until mid-to-late February.
How is the rest of the world reacting at that time? Here’s a New York Times article from February 2, 2020, entitled “Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say”, well before any of the worried posts on LessWrong:
The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world’s leading infectious disease experts.
The prospect is daunting. A pandemic — an ongoing epidemic on two or more continents — may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.
The tone of the article is fairly alarmed: it notes that in China the streets are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and gives expert quotes like this one:
It is “increasingly unlikely that the virus can be contained,” said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.
The worried posts on LessWrong don’t start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC’s guidance at the time about covid-19. It says that CFAR’s workshops “should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced.” This is about 2-3 weeks out from that stuff happening. So, what exactly is being called early here?
I’ve seen a few people in the LessWrong community congratulate the community on covid, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it.
Crypto and AI have obviously had many, many boosters and enthusiasts going back a long, long time.
I don’t know about the rest of the LessWrong community, but Eliezer Yudkowsky and MIRI were oddly late to the game with deep learning. I was complaining back in 2016 that none of MIRI’s research focused on machine learning. Yudkowsky’s response to me was that he didn’t think deep learning would lead to AGI. Eventually MIRI hired an intern or a junior researcher to focus on that. So, MIRI at least was late on deep learning.
Moreover, in the case of crypto and AI, or at least recent AI investment, so far these are mainly just speculative or “greater fool” investments that haven’t proved out any fundamentally profitable use case. (Picks and shovels may be profitable, and speculation and gambling may be profitable for some, but the underlying technologies haven’t shown any profitable use cases for the end user/end customer to the extent that would normally be required to justify the eye-watering valuations.) The latest Bank of America survey of professional investors found that slightly more than half of respondents think that AI is in a bubble — although, at the same time, most of them also remain heavily exposed to AI in their investments.
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
I think it should bother you that he thinks so. How could someone be so wrong about such a thing?
On the charges of racism, I think you’ll have to present some evidence for that.
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism:

“Human biodiversity (Part 2: Manifest)” (June 27, 2024)

“Human Biodiversity (Part 7: LessWrong)” (April 18, 2025)
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism: Moskovitz announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer or allude to the conference as the straw that broke the camel’s back.
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
Your comments about the Suez Canal insinuated that you think you’re smarter than the rest of the world. But you actually just didn’t understand the situation, and didn’t bother to do even a cursory Google search. You could have very quickly found out you were wrong about that if you thought to check. But instead you assumed the whole world — the whole world — is stupid and insane, and would be so much better off with only your guiding hand, I suppose. But maybe the world actually shouldn’t let your hand — or the hand of this community, or especially the LessWrong community — anywhere near the controls.
This kind of mistake is disqualifying and discrediting for anyone who aspires to that kind of power or influence. Which is explicitly what you were advocating — the world needs at least two movements or communities that think carefully about the world. Are EA and LessWrong really the only two? And do these communities actually think carefully about things? Apparently not the Suez Canal, at least.
Probably most or all of your opinions that take this form — the world is obviously stupid, I’m smarter than the world — are equally wrong. Probably most or all of the LessWrong community’s and the EA community’s opinions that take this form are equally wrong. Because they aren’t researched, they aren’t carefully thought about, they’re just shot off half-cocked and then assumed to be right. (And the outside world’s disagreement with them is sometimes circularly taken to be further evidence that the world is stupid and the community is smart.)
People in both communities pat themselves on the back ad nauseam for being the smartest people in the world or outsmarting the world — and for having great “epistemics”, which is ironic because if you Google “epistemics” or if you have studied philosophy, you know that “epistemics” is not a word.[1] This is infuriating when people routinely make mistakes this bad. Not just here — all the time, every day, everywhere, always. The same sort of mistakes — no basic fact checking, no Googling definitions of terms or concepts, no consulting expert opinion, simple logical or reasoning errors, methodological errors, math errors, “not even wrong” errors, and so on.
Mistakes are not necessarily bad, but the rate and severity of mistakes along with the messianic level of hubris — that combination is bad, very bad. That’s not intellectual or smart, that’s cult-y. (And LessWrong has literally created multiple cults, so I don’t think that’s an unfair descriptor.)
It’s not specifically your fault, it’s your fault and everyone else’s too.
I probably could have, maybe should have, made most of this a separate post or quick take, but your comment about the Suez Canal set me off. (Your recent comment about solving the science/philosophy of both shrimp and human consciousness in time for the Anthropic IPO also seems like an example of LessWrong/EA hubris.)
It’s not a word used in philosophy. Some people mistakenly think it is. It’s jargon of LessWrong’s/the EA Forum’s creation. If you look hard, you can find one EA definition of “epistemics” and one Center for Applied Rationality (CFAR) definition, but the two definitions contradict each other. The EA definition says epistemics is about the general quality of one’s thinking. CFAR, on the other hand, says that epistemics is the “construction of formal models” about knowledge. These are the only two definitions I’ve found, and they contradict each other.
I appreciate the correction on the Suez stuff.
I often don’t respond to people who write far more than I do.
I may not respond to this.