Reading Will’s post about the future of EA (here), I think there is also an option to “hang around and see what happens”. It seems valuable to have multiple similar communities. For a while I was more involved in EA, then more in rationalism. I can imagine being more involved in EA again.
A better Earth would build a second Suez Canal, to ensure that we don’t suffer trillions in damage if the first one gets stuck. Likewise, having two “think carefully about things” movements seems fine.
It hasn’t always felt like this “two is better than one” feeling is mutual. I guess the rationalist in me feels slighted by EA discourse around rationalist orgs, and by EA funders’ treatment of them, over the years. But maybe we can let that go and instead be glad that, should something go wrong with rationalism, EA will still be around.
What have EA funders done that’s upset you?
When the Ever Given got stuck in the Suez Canal in March 2021, it cost the global economy much less than trillions:
The Suez Canal blockage led to global losses of about $136.9 ($127.5-$147.3) billion
The Suez Canal is being expanded, and this was the plan before the Ever Given got stuck:
Following the 2021 grounding of the container ship Ever Given that blocked the vital waterway for six days, Egypt accelerated plans to extend the second channel in the southern reaches of the canal and widen the existing channel.
If members of the LessWrong community have truly found a reliably better way to think than the rest of the world, they should be able to achieve plenty of success in domains where success is externally verifiable, such as science, technology, engineering, medicine, business, economics, and so on. Since this is not the case, the LessWrong community has almost certainly not actually found a reliably better way to think. (It has started multiple cults, which is not something you typically associate with rationality.)
What the LessWrong community likes to do is fire off half-cocked opinions and assume the rest of the world must be stupid/insane without thinking about it that much, or looking into it. It hasn’t invented a new, better way to think. It’s just arrogance.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in, completely neglecting the astronomical cost of tunnels compared to roads — an obvious and well-known thing to consider. In his book Inadequate Equilibria, Yudkowsky specifically highlighted his opinion on Japanese monetary policy as the peak example of his superior rationality. He was wrong. In Harry Potter and the Methods of Rationality, Yudkowsky attempts both to teach readers about various concepts in science, the social sciences, and other fields and to condescend to them for not already knowing those concepts, and he gets many of them wrong. His grasp on deep learning doesn’t seem to be much better. This is a pattern.
Yudkowsky apparently never notices or admits these mistakes, possibly because they conflict with his self-image as by far the smartest person in the world — either in general or at least at AI safety/alignment research.
Unfortunately, Yudkowsky is the role model and guru for the LessWrong community, and now everyone in that community has a bit of his disease. Fire off a half-cocked opinion, declare yourself smarter than everybody in the world, attack people who point out your mistakes (cast aspersions, question their motives, call them evil, whatever), double down, repeat.
Let’s say you learned about a community of 3,000 people somewhere in Canada who claimed to have figured out how to be the smartest people in the world — they’ve been around for 15 years, but you’ve only just heard of them. What test, standard, measure, or criterion would you apply to this community to tell whether they really are smarter than everyone else in the world? Think about criteria that are as clear and objective as possible, the sort of thing you could use as resolution criteria on Metaculus, that others would agree on. How would you assess this?
On any reasonable, clear, objective test, standard, measure, or criterion, LessWrong fails. It simply overestimates its own abilities.
Yudkowsky himself articulated the logic here, when discussing the definition of rationality:
Be careful of this sort of argument, any time you find yourself defining the “winner” as someone other than the agent who is currently smiling from on top of a giant heap of utility.
Unfortunately, Yudkowsky often doesn’t take his own advice. For example, he’s paid a lot of lip service to the importance of changing one’s mind and updating one’s views based on new evidence, such as:
Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting, the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy. If you regard evidence as a constraint and seek to free yourself, you sell yourself into the chains of your whims.
Does that sound like Yudkowsky to you? He needs to follow that advice more than anyone. It’s hard to think of someone who follows that advice less.
I’m very glad that Dustin Moskovitz decided to stop funding projects related to the LessWrong community. I’m glad that people from the LessWrong community who were upset that the EA community isn’t racist enough for their liking decided to leave. (I would prefer they try to unlearn their racist biases instead, but failing that, I’m glad they left.) Something did go wrong with the LessWrong community, about 16 years ago. It was founded on a lie: that Eliezer Yudkowsky is by far the smartest person in the world, and he can teach you how to be smarter than anyone in the world, too.
My main hope is that people who have been ensnared by the LessWrong community will somehow find their way out. I don’t know how that will happen, but I hope somehow it does.
Edited on Friday, December 12, 2025 at 6:35pm Eastern to add:
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism:
“Human biodiversity (Part 2: Manifest)” (June 27, 2024)
“Human Biodiversity (Part 7: LessWrong)” (April 18, 2025)
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism: he announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer to or allude to the conference as the straw that broke the camel’s back.
I appreciate the correction on the Suez stuff.
If we’re going to criticise rationality, I think we should take the good with the bad. There are multiple adjacent cults, which I’ve said in the past. They were also early to crypto, early to AI, early to Covid. It’s sometimes hard to decide which things came from EA and which from Rationality, but there are a number of possible wins. If you don’t mention those, I think you’re probably fudging the numbers.
For example, in 2014, Eliezer Yudkowsky wrote that Earth is silly for not building tunnels for self-driving cars to drive in,
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
On the charges of racism, I think you’ll have to present some evidence for that.
I’ve seen you complain elsewhere that the ban times for negative karma comments are too long. I think they may be, but I guess they exist to stop behaviour exactly like this. Personally, I think it’s pretty antisocial to respond to a short message with an extremely long one that is kind of aggressive.
I think on the racism front Yarrow is referring to the perception that the reason Moskovitz won’t fund rationalist stuff is because either he thinks that a lot of rationalists believe Black people have lower average IQs than whites for genetic reasons, or he thinks that other people believe that and doesn’t want the hassle. I think that belief genuinely is quite common among rationalists, no? Although, there are clearly rationalists who don’t believe it, and most rationalists are not right-wing extremists as far as I can tell.
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism: he announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer to or allude to the conference as the straw that broke the camel’s back.
Sure, and do you want to stand on any of those accusations? I am not going to argue the point with two blog posts. What is the point you think is the strongest?
As for Moskovitz, he can do as he wishes, but I think it was an error. I do think that ugly or difficult topics should be discussed and I don’t fear that. LessWrong, and Manifest, have cut okay lines through these topics in my view. But it’s probably too early to judge.
Well, the evidence is there if you’re ever curious. You asked for it, and I gave it.
David Thorstad, who writes the Reflective Altruism blog, is a professional academic philosopher and, until recently, was a researcher at the Global Priorities Institute at Oxford. He was an editor of the recent Essays on Longtermism anthology published by Oxford University Press, which includes an essay co-authored by Will MacAskill, as well as a few other people well-known in the effective altruism community and the LessWrong community. He has a number of published academic papers on rationality, epistemology, cognition, existential risk, and AI. He’s about as deeply familiar with the effective altruist community as it’s possible for someone to be, and also has a deep familiarity with the LessWrong community.
In my opinion, David Thorstad has a deeper understanding of the EA community’s ideas and community dynamics than many in the community do, and, given the overlap between the EA community and the LessWrong community, his understanding extends to a significant degree to the LessWrong community as well. I think people in the EA community are used to drive-by criticisms from people who have paid minimal attention to EA and its ideas, but David has spent years interfacing with the community and doing both academic research and blogging related to EA. So, what he writes are not drive-by criticisms, and, indeed, a number of people in EA appear to listen to him, read his blog posts and academic papers, and take him seriously. All this to say, his work isn’t something that can be dismissed out of hand. It is the kind of scrutiny or critical appraisal that people in EA have been saying they want for years. Here it is, so folks had better at least give it a chance.
To me, “ugly or difficult topics should be discussed” is an inaccurate euphemism. I don’t think the LessWrong community is particularly capable of or competent at discussing ugly or difficult topics. I think they shy away from the ugly and difficult parts, and generally don’t have the stomach or emotional stamina to sit through the discomfort. What is instead happening in the LessWrong community is that people credulously accept ugly, wrong, evil ideas, partly due to an inability to handle the discomfort of scrutinizing them, and in large part because it is just an ideological trainwreck of a community that believes ridiculous stuff all the time (like the many examples I gave above) and typically has atrocious epistemic practices (e.g., guessing things or believing things based on a hunch without Googling them).
There are multiple adjacent cults, which I’ve said in the past.
What do you think the base rate for cult formation is for a town or community of that size? Seems like LessWrong is far, far above the base rate, maybe even by orders of magnitude.
They were also early to crypto, early to AI, early to Covid.
I don’t think any of these are particularly good or strong examples. A very large number of people were as early or earlier to all of these things as the LessWrong community.
For instance, many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, stores sold out of face masks in many cities in North America. (One example of many.) The oldest post on LessWrong tagged with “covid-19” is from well after this started happening. (I also searched the forum for posts containing “covid” or “coronavirus” and sorted by oldest. I couldn’t find an older post that was relevant.) That LessWrong post is written by a self-described “prepper” who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don’t start to get really worried about covid until mid-to-late February.
How is the rest of the world reacting at that time? Here’s a New York Times article from February 2, 2020, entitled “Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say”, well before any of the worried posts on LessWrong:
The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world’s leading infectious disease experts.
The prospect is daunting. A pandemic — an ongoing epidemic on two or more continents — may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.
The tone of the article is fairly alarmed: it notes that in China the streets are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and gives expert quotes like this one:
It is “increasingly unlikely that the virus can be contained,” said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.
The worried posts on LessWrong don’t start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC’s guidance at the time about covid-19. It says that CFAR’s workshops “should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced.” This is about 2-3 weeks out from that stuff happening. So, what exactly is being called early here?
I’ve seen a few people in the LessWrong community congratulate the community on covid, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it.
Crypto and AI have obviously had many, many boosters and enthusiasts going back a long, long time.
I don’t know about the rest of the LessWrong community, but Eliezer Yudkowsky and MIRI were oddly late to the game with deep learning. I was complaining back in 2016 that none of MIRI’s research focused on machine learning. Yudkowsky’s response to me was that he didn’t think deep learning would lead to AGI. Eventually MIRI hired an intern or a junior researcher to focus on that. So, MIRI at least was late on deep learning.
Moreover, in the case of crypto and AI, or at least recent AI investment, so far these are mainly just speculative or “greater fool” investments that haven’t proved out any fundamentally profitable use case. (Picks and shovels may be profitable, and speculation and gambling may be profitable for some, but the underlying technologies haven’t shown any profitable use cases for the end user/end customer to the extent that would normally be required to justify the eye-watering valuations.) The latest Bank of America survey of professional investors found that slightly more than half of respondents think that AI is in a bubble — although, at the same time, most of them also remain heavily exposed to AI in their investments.
I have never thought that Yudkowsky is the smartest person in the world, so this doesn’t really bother me deeply.
I think it should bother you that he thinks so. How could someone be so wrong about such a thing?
On the charges of racism, I think you’ll have to present some evidence for that.
The philosopher David Thorstad has extensively documented racism in the LessWrong community. See these two posts on his blog Reflective Altruism:
“Human biodiversity (Part 2: Manifest)” (June 27, 2024)
“Human Biodiversity (Part 7: LessWrong)” (April 18, 2025)
My impression is that Dustin Moskovitz filed for divorce from the LessWrong community due to its racism: he announced the decision in the wake of the infamous Manifest conference in 2024, and when he discussed the decision on the EA Forum, he seemed to refer to or allude to the conference as the straw that broke the camel’s back.
I can’t help but feel you are annoyed about this in general. But why speak to me in this tone? Have I specifically upset you?
Your comments about the Suez Canal insinuated that you think you’re smarter than the rest of the world. But you actually just didn’t understand the situation, and didn’t bother to do even a cursory Google search. You could have very quickly found out you were wrong about that if you thought to check. But instead you assumed the whole world — the whole world — is stupid and insane, and would be so much better off with only your guiding hand, I suppose. But maybe the world actually shouldn’t let your hand — or the hand of this community, or especially the LessWrong community — anywhere near the controls.
This kind of mistake is disqualifying and discrediting for anyone who aspires to that kind of power or influence. Which is explicitly what you were advocating — the world needs at least two movements or communities that think carefully about the world. Are EA and LessWrong really the only two? And do these communities actually think carefully about things? Apparently not the Suez Canal, at least.
Probably most or all of your opinions that take this form — the world is obviously stupid, I’m smarter than the world — are equally wrong. Probably most or all of the LessWrong community’s and the EA community’s opinions that take this form are equally wrong. Because they aren’t researched, they aren’t carefully thought about, they’re just shot off half-cocked and then assumed to be right. (And the outside world’s disagreement with them is sometimes circularly taken to be further evidence that the world is stupid and the community is smart.)
People in both communities pat themselves on the back ad nauseam for being the smartest people in the world or outsmarting the world — and for having great “epistemics”, which is ironic because if you Google “epistemics” or if you have studied philosophy, you know that “epistemics” is not a word.[1] This is infuriating when people routinely make mistakes this bad. Not just here — all the time, every day, everywhere, always. The same sort of mistakes — no basic fact checking, no Googling definitions of terms or concepts, no consulting expert opinion, simple logical or reasoning errors, methodological errors, math errors, “not even wrong” errors, and so on.
Mistakes are not necessarily bad, but the rate and severity of mistakes along with the messianic level of hubris — that combination is bad, very bad. That’s not intellectual or smart, that’s cult-y. (And LessWrong has literally created multiple cults, so I don’t think that’s an unfair descriptor.)
It’s not specifically your fault, it’s your fault and everyone else’s too.
I probably could have, maybe should have, made most of this a separate post or quick take, but your comment about the Suez Canal set me off. (Your recent comment about solving the science/philosophy of both shrimp and human consciousness in time for the Anthropic IPO also seems like an example of LessWrong/EA hubris.)
It’s not a word used in philosophy. Some people mistakenly think it is. It’s jargon of LessWrong’s/the EA Forum’s creation. If you look hard, you can find one EA definition of “epistemics” and one Center for Applied Rationality (CFAR) definition, but the two definitions contradict each other. The EA definition says epistemics is about the general quality of one’s thinking. CFAR, on the other hand, says that epistemics is the “construction of formal models” about knowledge. These are the only two definitions I’ve found, and they contradict each other.
I often don’t respond to people who write far more than I do.
I may not respond to this.