There are multiple adjacent cults, as I've said in the past.
What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude.
They were also early to crypto, early to AI, early to Covid.
I don't think any of these are particularly good or strong examples. A very large number of people were as early or earlier to all of these things as the LessWrong community.
For instance, many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, stores sold out of face masks in many cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn't find an older post that was relevant.) That LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don't start to get really worried about covid until mid-to-late February.
How was the rest of the world reacting at that time? Here's a New York Times article from February 2, 2020, titled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", published well before any of the worried posts on LessWrong:
The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world's leading infectious disease experts.
The prospect is daunting. A pandemic – an ongoing epidemic on two or more continents – may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.
The tone of the article is fairly alarmed: it notes that streets in China are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and gives expert quotes like this one:
It is "increasingly unlikely that the virus can be contained," said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.
The worried posts on LessWrong don't start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC's guidance at the time about covid-19. It says that CFAR's workshops "should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced." That was only about two to three weeks before those things actually happened. So, what exactly is being called early here?
I've seen a few people in the LessWrong community congratulate the community on covid, but I haven't actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it.
Crypto and AI have obviously had many, many boosters and enthusiasts going back a long, long time.
I don't know about the rest of the LessWrong community, but Eliezer Yudkowsky and MIRI were oddly late to the game with deep learning. I was complaining back in 2016 that none of MIRI's research focused on machine learning. Yudkowsky's response to me was that he didn't think deep learning would lead to AGI. Eventually MIRI hired an intern or a junior researcher to focus on machine learning. So, MIRI at least was late on deep learning.
Moreover, in the case of crypto and AI, or at least recent AI investment, so far these are mainly just speculative or "greater fool" investments that haven't proved out any fundamentally profitable use case. (Picks and shovels may be profitable, and speculation and gambling may be profitable for some, but the underlying technologies haven't shown any profitable use cases for the end user/end customer to the extent that would normally be required to justify the eye-watering valuations.) The latest Bank of America survey of professional investors found that slightly more than half of respondents think that AI is in a bubble – although, at the same time, most of them also remain heavily exposed to AI in their investments.
I have never thought that Yudkowsky is the smartest person in the world, so this doesn't really bother me deeply.
I think it should bother you that he thinks so. How could someone be so wrong about such a thing?
On the charges of racism, I think you'll have to present some evidence for that.
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism:

"Human biodiversity (Part 2: Manifest)" (June 27, 2024)

"Human Biodiversity (Part 7: LessWrong)" (April 18, 2025)
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism: he announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer to or allude to the conference as the straw that broke the camel's back.
I can't help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
Your comments about the Suez Canal insinuated that you think you're smarter than the rest of the world. But you actually just didn't understand the situation, and didn't bother to do even a cursory Google search. You could have very quickly found out you were wrong about that if you had thought to check. But instead you assumed the whole world – the whole world – is stupid and insane, and would be so much better off with only your guiding hand, I suppose. But maybe the world actually shouldn't let your hand – or the hand of this community, or especially the LessWrong community – anywhere near the controls.
This kind of mistake is disqualifying and discrediting for anyone who aspires to that kind of power or influence. Which is explicitly what you were advocating: that the world needs at least two movements or communities that think carefully about the world. Are EA and LessWrong really the only two? And do these communities actually think carefully about things? Apparently not about the Suez Canal, at least.
Probably most or all of your opinions that take this form – the world is obviously stupid, I'm smarter than the world – are equally wrong. Probably most or all of the LessWrong community's and the EA community's opinions that take this form are equally wrong. Because they aren't researched, they aren't carefully thought about; they're just shot off half-cocked and then assumed to be right. (And the outside world's disagreement with them is sometimes circularly taken as further evidence that the world is stupid and the community is smart.)
People in both communities pat themselves on the back ad nauseam for being the smartest people in the world or outsmarting the world – and for having great "epistemics", which is ironic because if you Google "epistemics" or if you have studied philosophy, you know that "epistemics" is not a word.[1] This is infuriating when people routinely make mistakes this bad. Not just here – all the time, every day, everywhere, always. The same sorts of mistakes: no basic fact-checking, no Googling definitions of terms or concepts, no consulting expert opinion, simple logical or reasoning errors, methodological errors, math errors, "not even wrong" errors, and so on.
Mistakes are not necessarily bad, but the rate and severity of mistakes along with the messianic level of hubris – that combination is bad, very bad. That's not intellectual or smart; that's cult-y. (And LessWrong has literally created multiple cults, so I don't think that's an unfair descriptor.)
It's not specifically your fault; it's your fault and everyone else's too.
I probably could have, maybe should have, made most of this a separate post or quick take, but your comment about the Suez Canal set me off. (Your recent comment about solving the science/philosophy of both shrimp and human consciousness in time for the Anthropic IPO also seems like an example of LessWrong/EA hubris.)
It's not a word used in philosophy, though some people mistakenly think it is. It's jargon of LessWrong's/the EA Forum's creation. If you look hard, you can find one EA definition of "epistemics" and one Center for Applied Rationality (CFAR) definition. The EA definition says epistemics is about the general quality of one's thinking; CFAR, on the other hand, says that epistemics is the "construction of formal models" about knowledge. These are the only two definitions I've found, and they contradict each other.
I often don't respond to people who write far more than I do.
I may not respond to this.