When the Ever Given got stuck in the Suez Canal in March 2021, it cost the global economy much less than trillions:
The Suez Canal blockage led to global losses of about $136.9 ($127.5-$147.3) billion
The Suez Canal is being expanded, and this was the plan before the Ever Given got stuck:
Following the 2021 grounding of the container ship Ever Given that blocked the vital waterway for six days, Egypt accelerated plans to extend the second channel in the southern reaches of the canal and widen the existing channel.
If members of the LessWrong community have truly found a reliably better way to think than the rest of the world, they should be able to achieve plenty of success in domains where success is externally verifiable, such as science, technology, engineering, medicine, business, economics, and so on. Since this is not the case, the LessWrong community has almost certainly not actually found a reliably better way to think. (It has started multiple cults, which is not something you typically associate with rationality.)
What the LessWrong community likes to do is fire off half-cocked opinions and assume the rest of the world must be stupid/insane without thinking about it that much, or looking into it. It hasn't invented a new, better way to think. It's just arrogance.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in, completely neglecting the astronomical cost of tunnels compared to roads, an obvious and well-known consideration. In his book Inadequate Equilibria, Yudkowsky specifically highlighted his opinion on Japanese monetary policy as the peak example of his superior rationality. He was wrong. In Harry Potter and the Methods of Rationality, Yudkowsky both tries to teach readers various concepts from science, the social sciences, and other fields and condescends to them for not already knowing those concepts, and he gets many of them wrong. His grasp of deep learning doesn't seem to be much better. This is a pattern.
Yudkowsky apparently never notices or admits these mistakes, possibly because they conflict with his self-image as by far the smartest person in the world, either in general or at least within AI safety/alignment research.
Unfortunately, Yudkowsky is the role model and guru for the LessWrong community, and now everyone in that community has a bit of his disease. Fire off a half-cocked opinion, declare yourself smarter than everybody in the world, attack people who point out your mistakes (cast aspersions, question their motives, call them evil, whatever), double down, repeat.
Let's say you learned about a community of 3,000 people somewhere in Canada who claimed to have figured out how to be the smartest people in the world. They've been around for 15 years, but you've only just heard of them. What test, standard, measure, or criterion would you apply to this community to tell whether they really are smarter than everyone else in the world? Think about criteria that are as clear and objective as possible, the sort of thing you could use as resolution criteria on Metaculus, that others would agree on. How would you assess this?
On any reasonable, clear, objective test, standard, measure, or criterion, LessWrong fails. It simply overestimates its own abilities.
Yudkowsky himself articulated the logic here, when discussing the definition of rationality:
Be careful of this sort of argument, any time you find yourself defining the "winner" as someone other than the agent who is currently smiling from on top of a giant heap of utility.
Unfortunately, Yudkowsky often doesn't take his own advice. He has paid a lot of lip service to the importance of changing one's mind and updating one's views based on new evidence. For example:
Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting, the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims.
Does that sound like Yudkowsky to you? He needs to follow that advice more than anyone. It's hard to think of someone who follows that advice less.
I'm very glad that Dustin Moskovitz decided to stop funding projects related to the LessWrong community. I'm glad that people from the LessWrong community who were upset that the EA community isn't racist enough for their liking decided to leave. (I would prefer they try to unlearn their racist biases instead, but failing that, I'm glad they left.) Something did go wrong with the LessWrong community, about 16 years ago. It was founded on a lie: that Eliezer Yudkowsky is by far the smartest person in the world, and he can teach you how to be smarter than anyone in the world, too.
My main hope is that people who have been ensnared by the LessWrong community will somehow find their way out. I don't know how that will happen, but I hope somehow it does.
Edited on Friday, December 12, 2025 at 6:35pm Eastern to add:
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism:
"Human biodiversity (Part 2: Manifest)" (June 27, 2024)
"Human Biodiversity (Part 7: LessWrong)" (April 18, 2025)
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community because of its racism: Moskovitz announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to allude to the conference as the straw that broke the camel's back.
I appreciate the correction on the Suez stuff.
If we're going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I've said in the past. They were also early to crypto, early to AI, early to Covid. It's sometimes hard to decide which things are from EA or Rationality, but there are a number of possible wins. If you don't mention those, I think you're probably fudging the numbers.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in,
I can't help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
I have never thought that Yudkowsky is the smartest person in the world, so this doesn't really bother me deeply.
On the charges of racism, I think you'll have to present some evidence for that.
I've seen you complain elsewhere that the ban times for negative karma comments are too long. I think they may be, but I guess they exist to stop behaviour exactly like this. Personally, I think it's pretty antisocial to respond to a short message with an extremely long one that is kind of aggressive.
I think on the racism front, Yarrow is referring to the perception that the reason Moskovitz won't fund rationalist stuff is that either he thinks a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn't want the hassle. I think that belief genuinely is quite common among rationalists, no? Although there are clearly rationalists who don't believe it, and most rationalists are not right-wing extremists as far as I can tell.
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community because of its racism: Moskovitz announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to allude to the conference as the straw that broke the camel's back.
Sure, and do you want to stand on any of those accusations? I am not going to argue the point with two blog posts. What is the point you think is the strongest?
As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed, and I don't fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it's probably too early to judge.
Well, the evidence is there if you're ever curious. You asked for it, and I gave it.
David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as essays by a few other people well-known in the effective altruism community and the LessWrong community. He has a number of published academic papers on rationality, epistemology, cognition, existential risk, and AI. He is also about as deeply familiar with the effective altruist community as it is possible for someone to be, and he has a deep familiarity with the LessWrong community as well.
In my opinion, David Thorstad has a deeper understanding of the EA community's ideas and community dynamics than many people in the community do, and, given the overlap between the EA community and the LessWrong community, his understanding extends to a significant degree to the LessWrong community as well. I think people in the EA community are accustomed to drive-by criticisms from people who have paid minimal attention to EA and its ideas, but David has spent years interfacing with the community and doing both academic research and blogging related to EA. So, what he writes is not drive-by criticism, and, indeed, a number of people in EA listen to him, read his blog posts and academic papers, and take him seriously. All this to say, his work isn't something that can be dismissed out of hand. It is the kind of scrutiny or critical appraisal that people in EA have been saying they want for years. Here it is, so folks had better at least give it a chance.
To me, "ugly or difficult topics should be discussed" is an inaccurate euphemism. I don't think the LessWrong community is particularly capable of or competent at discussing ugly or difficult topics. I think they shy away from the ugly and difficult parts, and generally don't have the stomach or emotional stamina to sit through the discomfort. What is instead happening in the LessWrong community is that people are credulously accepting ugly, wrong, evil, and stupid ideas, in some part due to an inability to handle the discomfort of scrutinizing them, and in large part because the community is an ideological trainwreck that believes ridiculous stuff all the time (like the many examples I gave above) and typically has atrocious epistemic practices (e.g., people just guess stuff or believe stuff based on a hunch without Googling it; the community is extremely insular and fiercely polices the insider/outsider boundary, and landing on the right side of that boundary is sometimes what even determines whether people keep their job, their friends, their current housing, or their current community).
There are multiple adjacent cults, which I've said in the past.
What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude.
They were also early to crypto, early to AI, early to Covid.
I don't think any of these are particularly good or strong examples. A very large number of people were as early to all of these things as the LessWrong community, or earlier.
For instance, many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, stores sold out of face masks in many cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn't find an older post that was relevant.) That LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes an ambivalent, cautious tone similar to that of many mainstream news articles published before it.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is from February 5, 2020. The posts don't start to get really worried about covid until mid-to-late February.
How is the rest of the world reacting at that time? Here's a New York Times article from February 2, 2020, entitled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", well before any of the worried posts on LessWrong:
The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world's leading infectious disease experts.
The prospect is daunting. A pandemic (an ongoing epidemic on two or more continents) may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.
The tone of the article is fairly alarmed: it notes that in China the streets are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and gives expert quotes like this one:
It is "increasingly unlikely that the virus can be contained," said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.
The worried posts on LessWrong don't start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC's guidance at the time about covid-19. It says that CFAR's workshops "should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced." This is about 2-3 weeks out from that stuff happening. So, what exactly is being called early here?
I've seen a few people in the LessWrong community congratulate the community on its covid response, but I haven't actually seen evidence that the LessWrong community was particularly early on covid or gave particularly wise advice about what to do about it.
Crypto and AI have obviously had many, many boosters and enthusiasts going back a long, long time.
I don't know about the rest of the LessWrong community, but Eliezer Yudkowsky and MIRI were oddly late to the game with deep learning. I was complaining back in 2016 that none of MIRI's research focused on machine learning. Yudkowsky's response to me was that he didn't think deep learning would lead to AGI. Eventually MIRI hired an intern or a junior researcher to focus on that. So, MIRI at least was late on deep learning.
Moreover, in the case of crypto and AI, or at least recent AI investment, these are so far mainly speculative or "greater fool" investments that haven't proved out any fundamentally profitable use case. (Picks and shovels may be profitable, and speculation and gambling may be profitable for some, but the underlying technologies haven't shown profitable use cases for the end user/end customer to the extent that would normally be required to justify the eye-watering valuations.) The latest Bank of America survey of professional investors found that slightly more than half of respondents think that AI is in a bubble, although, at the same time, most of them also remain heavily exposed to AI in their investments.
I have never thought that Yudkowsky is the smartest person in the world, so this doesn't really bother me deeply.
I think it should bother you that he thinks so. How could someone be so wrong about such a thing?
On the charges of racism, I think you'll have to present some evidence for that.
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism:
"Human biodiversity (Part 2: Manifest)" (June 27, 2024)
"Human Biodiversity (Part 7: LessWrong)" (April 18, 2025)
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community because of its racism: Moskovitz announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to allude to the conference as the straw that broke the camel's back.
I can't help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
Your comments about the Suez Canal insinuated that you think you're smarter than the rest of the world. But you actually just didn't understand the situation, and didn't bother to do even a cursory Google search. You could have very quickly found out you were wrong about that if you thought to check. But instead you assumed the whole world, the whole world, is stupid and insane, and would be so much better off with only your guiding hand, I suppose. But maybe the world actually shouldn't let your hand, or the hand of this community, or especially the LessWrong community, anywhere near the controls.
This kind of mistake is disqualifying and discrediting for anyone who aspires to that kind of power or influence, which is explicitly what you were advocating: the world needs at least two movements or communities that think carefully about the world. Are EA and LessWrong really the only two? And do these communities actually think carefully about things? Apparently not about the Suez Canal, at least.
Probably most or all of your opinions that take this form ("the world is obviously stupid, I'm smarter than the world") are equally wrong. Probably most or all of the LessWrong community's and the EA community's opinions that take this form are equally wrong. They aren't researched, they aren't carefully thought about; they're just shot off half-cocked and then assumed to be right. (And the outside world's disagreement with them is sometimes circularly taken as further evidence that the world is stupid and the community is smart.)
People in both communities pat themselves on the back ad nauseam for being the smartest people in the world or outsmarting the world, and for having great "epistemics", which is ironic, because if you Google "epistemics" or if you have studied philosophy, you know that "epistemics" is not a word.[1] This is infuriating when people routinely make mistakes this bad. Not just here: all the time, every day, everywhere, always. The same sort of mistakes: no basic fact checking, no Googling definitions of terms or concepts, no consulting expert opinion, simple logical or reasoning errors, methodological errors, math errors, "not even wrong" errors, and so on.
Mistakes are not necessarily bad, but this rate and severity of mistakes combined with this messianic level of hubris is bad, very bad. That's not intellectual or smart, that's cult-y. (And LessWrong has literally created multiple cults, so I don't think that's an unfair descriptor.)
It's not specifically your fault; it's your fault and everyone else's too.
I probably could have, maybe should have, made most of this a separate post or quick take, but your comment about the Suez Canal set me off. (Your recent comment about solving the science/philosophy of both shrimp and human consciousness in time for the Anthropic IPO also seems like an example of LessWrong/EA hubris.)
It's not a word used in philosophy. Some people mistakenly think it is. It's jargon of LessWrong's/the EA Forum's creation. If you look hard, you can find one EA definition of "epistemics" and one Center for Applied Rationality (CFAR) definition, but the two definitions contradict each other. The EA definition says epistemics is about the general quality of one's thinking; CFAR, on the other hand, says that epistemics is the "construction of formal models" about knowledge. These are the only two definitions I've found.
I often don't respond to people who write far more than I do.
I may not respond to this.