Here are my rules of thumb for improving communication on the EA Forum and in similar spaces online:
Say what you mean, as plainly as possible.
Try to use words and expressions that a general audience would understand.
Be more casual and less formal if you think that means more people are more likely to understand what you’re trying to say.
To illustrate abstract concepts, give examples.
Where possible, try to let go of minor details that aren’t important to the main point someone is trying to make. Everyone slightly misspeaks (or mis… writes?) all the time. Attempts to correct minor details often turn into time-consuming debates that ultimately have little importance. If you really want to correct a minor detail, do so politely, and acknowledge that you’re engaging in nitpicking.
When you don’t understand what someone is trying to say, just say that. (And be polite.)
Don’t engage in passive-aggressiveness or code insults in jargon or formal language. If someone’s behaviour is annoying you, tell them it’s annoying you. (If you don’t want to do that, then you probably shouldn’t try to communicate the same idea in a coded or passive-aggressive way, either.)
If you’re using an uncommon word or using a word that also has a more common definition in an unusual way (such as “truthseeking”), please define that word as you’re using it and — if applicable — distinguish it from the more common way the word is used.
Err on the side of spelling out acronyms, abbreviations, and initialisms. You don’t have to spell out “AI” as “artificial intelligence”, but an obscure term like “full automation of labour” or “FAOL” that was made up for one paper should definitely be spelled out.
When referencing specific people or organizations, err on the side of giving a little more context, so that someone who isn’t already in the know can more easily understand who or what you’re talking about. For example, instead of just saying “MacAskill” or “Will”, say “Will MacAskill” — just using the full name once per post or comment is enough. You could also mention someone’s profession (e.g. “philosopher”, “economist”) or the organization they’re affiliated with (e.g. “Oxford University”, “Anthropic”). For organizations, when it isn’t already obvious in context, it might be helpful to give a brief description. Rather than saying, “I donated to New Harvest and still feel like this was a good choice”, you could say “I donated to New Harvest (a charity focused on cell cultured meat and similar biotech) and still feel like this was a good choice”. The point of all this is to make what you write easy for more people to understand without lots of prior knowledge or lots of Googling.
When in doubt, say it shorter.[1] In my experience, when I take something I’ve written that’s long and try to cut it down to something short, I usually end up with something a lot clearer and easier to understand than what I originally wrote.
Kindness is fundamental. Maya Angelou said, “At the end of the day people won’t remember what you said or did, they will remember how you made them feel.” Being kind is usually more important than whatever argument you’re having.
This advice comes from the psychologist Harriet Lerner’s wonderful book Why Won’t You Apologize? — given in the completely different context of close personal relationships. I think it also works here.
I used to feel so strongly about effective altruism. But my heart isn’t in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven’t been able to sustain a vegan diet for more than a short time. And so on.
But there isn’t a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI’s takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.
-The extent to which LessWrong culture has taken over or “colonized” effective altruism culture is such a bummer. I know there’s been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or “rationalism” and the more knowledge and experience of it I’ve gained, the more I’ve accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.
-The stories about sexual harassment are so disgusting. They’re really, really bad and crazy. And it’s so annoying how many comments you see on EA Forum posts about sexual harassment that make exhausting, unempathetic, arrogant, and frankly ridiculous statements, if not borderline incomprehensible in some cases. You see these stories of sexual harassment in the posts and you see evidence of the culture that enables sexual harassment in the comments. Very, very, very bad. Not my idea of a community I can wholeheartedly feel I belong to.
-Kind of a similar story with sexism, racism, and transphobia. The level of underreaction I’ve seen to instances of racism has been crazymaking. It’s similar to the comments under the posts about sexual harassment. You see people justifying or downplaying clearly immoral behaviour. It’s sickening.
-A lot of the response to the Nonlinear controversy was disheartening. It was disheartening to see how many people were eager to enable, justify, excuse, downplay, etc. bad behaviour. Sometimes aggressively, arrogantly, and rudely. It was also disillusioning to see how many people were so… easily fooled.
-Nobody talks normal in this community. At least not on this forum, in blogs, and on podcasts. I hate the LessWrong lingo. To the extent the EA Forum has its own distinct lingo, I probably hate that too. The lingo is great if you want to look smart. It’s not so great if you want other people to understand what the hell you are talking about. In a few cases, it seems like it might even be deliberate obscurantism. But mostly it’s just people making poor choices around communication and writing style and word choice, maybe for some good reasons, maybe for some bad reasons, but bad choices either way. I think it’s rare that writing with a more normal diction wouldn’t enhance people’s understanding of what you’re trying to say, even if you’re only trying to communicate with people who are steeped in the effective altruist niche. I don’t think the effective altruist sublanguage is serving good thinking or good communication.
-I see a lot of interesting conjecture elevated to the level of conventional wisdom. Someone in the EA or LessWrong or rationalist subculture writes a creative, original, evocative blog post or forum post and then it becomes a meme, and those memes end up taking on a lot of influence over the discourse. Some of these ideas are probably promising. Many of them probably contain at least a grain of truth or insight. But they become conventional wisdom without enough scrutiny. Simply because an idea is “homegrown”, it takes on the force of a scientific idea that’s been debated and tested in peer-reviewed journals for 20 years, or of a widely held precept of academic philosophy. That seems intellectually wrong and also weirdly self-aggrandizing.
-An attitude I could call “EA exceptionalism”, where people assert that people involved in effective altruism are exceptionally smart, exceptionally wise, exceptionally good, exceptionally selfless, etc. Not just above the average or median (however you would measure that), but part of a rare elite and maybe even superior to everyone else in the world. I see no evidence this is true. (In these sorts of discussions, you also sometimes see the lame argument that effective altruism is definitionally the correct approach to life because effective altruism means doing the most good and if something isn’t doing the most good, then it isn’t EA. The obvious implication of this argument is that what’s called “EA” might not be true EA, and maybe true EA looks nothing like “EA”. So, this argument is not a defense of the self-identified “EA” movement or community or self-identified “EA” thought.)
-There is a dark undercurrent to some EA thought, along the lines of negative utilitarianism, anti-natalism, misanthropy, and pessimism. I think there is a risk of this promoting suicidal ideation because it basically is suicidal ideation.
-Too much of the discourse seems to revolve around how to control people’s behaviours or beliefs. It’s a bit too House of Cards. I recently read about the psychologist Kurt Lewin’s study on the most effective ways to convince women to use animal organs (e.g. kidneys, livers, hearts) in their cooking during the meat shortages of World War II. He found that a less paternalistic approach that showed more respect for the women was more effective in getting them to incorporate animal organs into their cooking. The way I think about this is: you didn’t have to be manipulated to get to the point where you are in believing what you believe or caring this much about this issue. So, instead of thinking of how to best manipulate people, think about how you got to the point where you are and try to let people in on that in an honest, straightforward way. Not only is this probably more effective, it’s also more moral and shows more epistemic humility (you might be wrong about what you believe and that’s one reason not to try to manipulate people into believing it).
-A few more things but this list is already long enough.
Put all this together and the old stuff I cared about (charity effectiveness, giving what I can, expanding my moral circle) is lost in a mess of other stuff that is antithetical to what I value and what I believe. I’m not even sure the effective altruism movement should exist anymore. The world might be better off if it closed down shop. I don’t know. It could free up a lot of creativity and focus and time and resources to work on other things that might end up being better things to work on.
I still think there is value in the version of effective altruism I knew around 2015, when the primary focus was on global poverty and the secondary focus was on animal welfare, and AGI was on the margins. That version of effective altruism is so different from what exists today — which is mostly about AGI and has mostly been taken over by the rationalist subculture — that I have to consider those two different things. Maybe the old thing will find new life in some new form. I hope so.
I’d distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they’ve always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil’s website and so far this year it seems well down from 2018 but also well up from 2015, but also 2 months isn’t much of a sample.) Not that that means you’re not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
Yes, this seems similar to how I feel: I think the major donor(s) have re-prioritized, but I’m not so sure how many people have switched from other causes to AI. I think EA is more left to the grassroots now, and the forum has probably increased in importance. If the major donors make the forum all about AI, then we’ll have to create a new forum! But as donors change towards AI, the forum will inevitably see more AI content. Maybe some functions to “balance” the forum posts so one gets representative content across all cause areas? Much like they made it possible to separate out community posts?
Thanks for sharing this. While I personally believe the shift in focus to AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with many of the other concerns you shared and agree with many of them (especially LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand in your situation if you don’t want to interact with the community anymore, I just want to share that I believe your voice is really important and I hope you continue to engage with EA! I wouldn’t want the movement to discourage anyone who shares its principles (like “let’s use our time and resources to help others the most”), but disagrees with how it’s being put into practice, from actively participating.
I don’t think people dropped the ball here, really; people were honestly struggling to take accusations of bad behaviour seriously without getting into witch hunt dynamics.
Good point, I guess my lasting impression wasn’t entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn’t feel discouraged from actively participating in EA.
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people’s minds about near-term AGI?
I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn’t do that yet. This could change — and I think that’s what a lot of people in the business world are thinking and hoping. But my view is a) LLMs have fundamental weaknesses that make this unlikely and b) scaling is running out of steam.
Scaling running out of steam actually means three things:
1) Each new 10x increase in compute seems to be yielding smaller improvements in model capabilities, so scaling is becoming less valuable.
2) Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.
3) There is an absolute ceiling to the amount of data LLMs can train on that they are probably approaching.
So, AI investment is dependent on financial expectations that depend on LLMs enhancing productivity, which isn’t happening and probably won’t happen due to fundamental problems with LLMs and due to scaling becoming less valuable and less feasible. This implies an AI bubble, which implies the bubble will eventually pop.
So, if the bubble pops, will that lead people who currently have a much higher estimation than I do of LLMs’ current capabilities and near-term prospects to lower that estimation? If AI investment turns out to be a bubble, and it pops, would you change your mind about near-term AGI? Would you think it’s much less likely? Would you think AGI is probably much farther away?
Rate limiting on the EA Forum is too strict. Given that people karma downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.
This is a message I saw recently:
You aren’t just rate limited for 24 hours once you fall below the recent karma threshold (which can be triggered by one comment that is unpopular with a handful of people), you’re rate limited for as many days as it takes you to gain 25 net karma on new comments — which might take a while, since you can only leave one comment per day, and, also, people might keep downvoting your unpopular comment. (Unless you delete it — which I think I’ve seen happen, but I won’t do, myself, because I’d rather be rate limited than self-censor.)
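To make the mechanics concrete, here is a rough sketch of the rule as I understand it. The specific numbers (the 20-item window, the karma floor, the 25 net karma needed to recover) are my reading of how the system behaves, not the forum’s actual code:

```python
# A rough sketch of the rate-limiting rule as I understand it.
# All thresholds here are illustrative assumptions, not the forum's actual code.

RECENT_WINDOW = 20     # number of most recent posts/comments considered
KARMA_FLOOR = 0        # assumed floor on net karma within that window
RECOVERY_KARMA = 25    # net karma on new comments needed to lift the limit


def is_rate_limited(recent_scores, currently_limited=False, karma_since_limited=0):
    """Return True if the user is held to one comment per day (sketch only)."""
    if currently_limited:
        # The limit lifts only after new comments earn enough net karma,
        # which is slow at one comment per day (and people may keep
        # downvoting the comment that triggered the limit).
        return karma_since_limited < RECOVERY_KARMA
    # A single heavily downvoted comment can drag the recent total below the floor.
    recent_net = sum(recent_scores[-RECENT_WINDOW:])
    return recent_net < KARMA_FLOOR
```

Under this reading, one heavily downvoted comment can put a long-established account in the same one-comment-per-day bucket as a brand-new one.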
The rate limiting system is a brilliant idea for new users or users who have less than 50 total karma — the ones who have little plant icons next to their names. It’s an elegant, automatic way to stop spam, trolling, and other abuses. But my forum account is 2.5 years old and I have over 1,000 karma. I have 24 posts published over 2 years, all with positive karma. My average karma per post/comment is +2.3 (not counting the default karma that all post/comments start with; this is just counting karma from people’s votes).
Examples of comments I’ve gotten downvoted into the net −1 karma or lower range include a methodological critique of a survey that was later accepted to be correct and led to the research report of an EA-adjacent organization getting revised. In another case, a comment was downvoted to negative karma when it was only an attempt to correct the misuse of a technical term in machine learning — a topic which anyone can confirm I’ve gotten right with a few fairly quick Google searches. People are absolutely not just downvoting comments that are poor quality or rude by any reasonable standard. They are downvoting things they disagree with or dislike for some other reason. (There are many other examples like the ones I just gave, including everything from directly answering a question to clarifying a point of disagreement to expressing a fairly anodyne and mainstream opinion that at least some prominent experts in the relevant field agree with.) Given this, karma downvoting as an automatic moderation tool with thresholds this sensitive just discourages disagreement.
One of the most important cognitive biases to look out for in a context like EA is group polarization, which is the tendency of individuals’ views to become more extreme once they join a group, even if each of the individuals had less extreme views before joining the group (i.e., they aren’t necessarily being converted by a few zealots who already had extreme views before joining). One way to mitigate group polarization is to have a high tolerance for internal disagreement and debate. I think the EA Forum does have that tolerance for certain topics and within certain windows of accepted opinions for most topics that are discussed, but not for other topics or only within a window that is quite narrow if you compare it to, say, the general population or expert opinion.
For example, 76% of AI experts believe it’s unlikely or very unlikely that LLMs will scale to AGI according to one survey, yet the opinion of EA Forum users seems to be the opposite of that. Not everyone on the EA Forum seems to consider the majority expert opinion an opinion worth considering too seriously. To me, that looks like group polarization in action. It’s one thing to disagree with expert opinion with some degree of uncertainty and epistemic humility, it’s another thing to see expert opinion as beneath serious discussion.
I don’t know what specific tweaks to the rate limiting system would be best. Maybe just turn it off altogether for users with over 500 karma (and rely on reporting posts/comments and moderator intervention to handle real problems), or as Jason suggested here, have the karma threshold trigger manual review by a moderator rather than automatic rate limiting. Jason also made some other interesting suggestions for tweaks in that comment and noted, correctly:
Strong downvoting by a committed group is the most obvious way to manipulate the system into silencing those with whom you disagree.
This actually works. I am reluctant to criticize the ideas of or express disagreement with certain organizations/books because of rate limiting, and rate limiting is the #1 thing that makes me feel like just giving up on trying to engage in intellectual debate and discussion and just quit the EA Forum.
I may be slow to reply to any comments on this quick take due to the forum’s rate limiting.
I think this highlights why some necessary design features of the karma system don’t translate well to a system that imposes soft suspensions on users. (To be clear, I find a one-comment-per-day limit based on the past 20 comments/posts to cross the line into soft suspension territory; I do not suggest that rate limits are inherently soft suspensions.)
I wrote a few days ago about why karma votes need to be anonymous and shouldn’t (at least generally) require the voter to explain their reasoning; the votes suggested general agreement on those points. But a soft suspension of an established user is a different animal, and requires greater safeguards to protect both the user and the openness of the Forum to alternative views.
I should emphasize that I don’t know who cast the downvotes that led to Yarrow’s soft suspension (which were on this post about MIRI), or why they cast their votes. I also don’t follow MIRI’s work carefully enough to have a clear opinion on the merits of any individual vote by the lights of the ordinary purposes of karma. So I do not intend to imply dodgy conduct by anyone. But: “Justice must not only be done, but must also be seen to be done.” People who are considering stating unpopular opinions shouldn’t have to trust voters to the extent they have to at present to avoid being soft suspended.
Neutrality: Because the votes were anonymous, it is possible that people who were involved in the dispute were casting votes that had the effect of soft-suspending Yarrow.
Accountability: No one has to accept responsibility and the potential for criticism for imposing a soft-suspension via karma downvotes. Not even in their own minds—since nominally all they did was downvote particular posts.
Representativeness: A relatively small number of users on a single thread—for whom there is no evidence of being representative of the Forum community as a whole—cast the votes in question. Their votes have decided for the rest of the community that we won’t be hearing much from Yarrow (on any topic) for a while.[1]
Reasoning transparency: Stating (or at least documenting) one’s reasoning serves as a check on decisions made on minimal or iffy reasoning getting through. [Moreover, even if voters had been doing so silently, they were unlikely to be reasoning about a vote to soft suspend Yarrow, which is what their votes were whether they realized it or not.]
There are good reasons to find that the virtues of accountability, representativeness, and reasoning transparency are outweighed by other considerations when it comes to karma generally. (As for neutrality, I think we have to accept that technical and practical limitations exist.) But their absence when deciding to soft suspend someone creates too high a risk of error for the affected user, too high a risk of suppressing viewpoints that are unpopular with elements of the Forum userbase, and too much chilling effect on users’ willingness to state certain viewpoints. I continue to believe that, for more established users, karma count should only trigger a moderator review to assess whether a soft suspension is warranted.
Although the mods aren’t necessarily representative in the abstract, they are more likely to not have particular views on a given issue than the group of people who actively participate on a given thread (and especially those who read the heavily downvoted comments on that thread). I also think the mods are likely to have a better understanding of their role as representatives of the community than individual voters do, which mitigates this concern.
I’ve really appreciated comments and reflections from @Yarrow Bouchard 🔸 and I think in his case at least this does feel a bit unfair. It’s good to encourage new people on the forum, unless they are posting particularly egregious things, which I don’t think he has been.
Rate limits should not apply to comments on your own quick takes
Rate limits could maybe not count negative karma below −10 or so; it seems much better to rate limit someone only when they have multiple downvoted comments
2.4:1 is not a very high karma:submission ratio. I have 10:1 even if you exclude the April Fools’ Day posts, though that could be because I have more popular opinions, which means that I could double my comment rate and get −1 karma on the extras and still be at 3.5
if I were Yarrow I would contextualize more or use more friendly phrasing or something, and also not be bothered too much by single downvotes
From scanning the linked comments I think that downvoters often think the comment in question has bad reasoning and detracts from effective discussion, not just that they disagree
Deliberately not opining on the echo chamber question
Can you explain what you mean by “contextualizing more”? (What a curiously recursive question...)
You definitely have more popular opinions (among the EA Forum audience), and also you seem to court controversy less, i.e. a lot of your posts are about topics that aren’t controversial on the EA Forum. For example, if you were to make a pseudonymous account and write posts/comments arguing that near-term AGI is highly unlikely, I think you would definitely get a much lower karma to submission ratio, even if you put just as much effort and care into them as the posts/comments you’ve written on the forum so far. Do you think it wouldn’t turn out that way?
I’ve been downvoted on things that are clearly correct, e.g. the standard definitions of terms in machine learning (which anyone can Google), or a methodological critique that the Forecasting Research Institute later acknowledged was correct and revised their research to reflect. In other cases, the claims are controversial, but they are also claims where prominent AI experts like Andrej Karpathy, Yann LeCun, or Ilya Sutskever have said exactly the same thing as I said — and, indeed, in some cases I’m literally citing them — and it would be wild to think these sorts of claims are below the quality threshold for the EA Forum. I think that should make you question whether downvotes are a reliable guide to the quality of contributions.
One-off instances of one person downvoting don’t bother me that much — that literally doesn’t matter, as long as it really is one-off — what bothers me is the pattern. It isn’t just with my posts/comments, either, it’s across the board on the forum. I see it all the time with other contributors as well. I feel uneasy dragging those people into this discussion without their permission — it’s easier to talk about myself — but this is an overall pattern.
Whether reasoning is good or bad is always bound to be controversial when debating about topics that are controversial, about which there is a lot of disagreement. Just downvoting what you judge to be bad reasoning will, statistically, amount to downvoting what you disagree with. Since downvotes discourage and, in some cases, disable (through the forum’s software) disagreement, you should ask: is that the desired outcome? Personally, I rarely, pretty much never, downvote based on what I perceive to be the reasoning quality for exactly this reason.
When people on the EA Forum deeply engage with the substance of what I have to say, I’ve actually found a really high rate of them changing their minds (not necessarily from P to ¬P, but shifting along a spectrum and rethinking some details). It’s a very small sample size, only a few people, but it’s something like this: out of the five people I’ve had a lengthy back-and-forth with over the last two months, three of them changed their minds in some significant way. (I’m not doing rigorous statistics here, just counting examples from memory.) And in two of the three cases, the other person’s tone started out highly confident, giving me the impression they initially thought there was basically no chance I had any good points that were going to convince them. That is the counterbalance to everything else because that’s really encouraging!
I put in an effort to make my tone friendly and conciliatory, and I’m aware I probably come off as a bit testy some of the time, but I’m often responding to a much harsher delivery from the other person and underreacting in order to deescalate the tension. (For example, the person who got the ML definitions wrong started out by accusing me of “bad faith” based on their misunderstanding of the definitions. There were multiple rounds of me engaging with politeness and cordiality before I started getting a bit testy. That’s just one example, but there are others — it’s frequently a similar dynamic. Disagreeing with the majority opinion of the group is a thankless job because you have to be nicer to people than they are to you, and then that still isn’t good enough and people say you should be even nicer.)
Can you explain what you mean by “contextualizing more”? (What a curiously recursive question...)
I mean it in this sense; making people think you’re not part of the outgroup and don’t have objectionable beliefs related to the ones you actually hold, in whatever way is sensible and honest.
Maybe LW is better at using the disagreement button, as I find it’s pretty common for unpopular opinions to get lots of upvotes and disagree votes. One could use the API to see if the correlations are different there.
I think this is a significant reason why people downvote some, but not all, things they disagree with. Especially a member of the outgroup who makes arguments EAs have refuted before and need to reexplain, not saying it’s actually you
Claude thinks possible outgroups include the following, which is similar to what I had in mind
Based on the EA Forum’s general orientation, here are five individuals/groups whose characteristic opinions would likely face downvotes:
Effective accelerationists (e/acc) - Advocates for rapid AI development with minimal safety precautions, viewing existential risk concerns as overblown or counterproductive
TESCREAL critics (like Emile Torres, as you mentioned) - Scholars who frame longtermism/EA as ideologically dangerous, often linking it to eugenics, colonialism, or techno-utopianism
Anti-utilitarian philosophers—Strong deontologists or virtue ethicists who reject consequentialist frameworks as fundamentally misguided, particularly on issues like population ethics or AI risk trade-offs
Degrowth/anti-progress advocates—Those who argue economic/technological growth is net-negative and should be reduced, contrary to EA’s generally pro-progress orientation
Left-accelerationists and systemic change advocates—Critics who view EA as a “neoliberal” distraction from necessary revolutionary change, or who see philanthropic approaches as fundamentally illegitimate compared to state redistribution
a) I’m not sure all of those count as someone who would necessarily be an outsider to EA (e.g. Will MacAskill only assigns a 50% probability to consequentialism being correct, and he and others in EA have long emphasized pluralism about normative ethical theories; there’s been an EA system change group on Facebook since 2015 and discourse around systemic change has been happening in EA since before then)
b) Even if you do consider people in all those categories to be outsiders to EA or part of “the out-group”, us/them or in-group/out-group thinking seems like a bad idea, possibly leading to insularity, incuriosity, and overconfidence in wrong views
c) It’s especially a bad idea to not only think in in-group/out-group terms and seek to shut down perspectives of “the out-group” but also to cast suspicion on the in-group/out-group status of anyone in an EA context who you happen to disagree with about something, even something minor — that seems like a morally, subculturally, and epistemically bankrupt approach
You’re shooting the messenger. I’m not advocating for downvoting posts that smell of “the outgroup”, just saying that this happens in most communities that are centered around an ideological or even methodological framework. It’s a way you can be downvoted while still being correct, especially from the LEAST thoughtful 25% of EA forum voters
Please read the quote from Claude more carefully. MacAskill is not an “anti-utilitarian” who thinks consequentialism is “fundamentally misguided”, he’s the moral uncertainty guy. The moral parliament usually recommends actions similar to consequentialism with side constraints in practice.
I probably won’t engage more with this conversation.
I don’t know, sorry. I admittedly tend to steer clear of community debates as they make me sad, probably shouldn’t have commented in the first place...
I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.
Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, some stores sold out of face masks in several different cities in North America. (One example of many.) The oldest post on LessWrong tagged with “covid-19” is from well after this started happening. (I also searched the forum for posts containing “covid” or “coronavirus” and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described “prepper” who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes an ambivalent, cautious tone similar to that of many mainstream news articles published before it.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don’t start to get really worried about covid until mid-to-late February.
How is the rest of the world reacting at that time? Here’s a New York Times article from February 2, 2020, entitled “Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say”, well before any of the worried posts on LessWrong:
The Wuhan coronavirus spreading from China is now likely to become a pandemic that circles the globe, according to many of the world’s leading infectious disease experts.
The prospect is daunting. A pandemic — an ongoing epidemic on two or more continents — may well have global consequences, despite the extraordinary travel restrictions and quarantines now imposed by China and other countries, including the United States.
The tone of the article is fairly alarmed, noting that in China the streets are deserted due to the outbreak, it compares the novel coronavirus to the 1918-1920 Spanish flu, and it gives expert quotes like this one:
It is “increasingly unlikely that the virus can be contained,” said Dr. Thomas R. Frieden, a former director of the Centers for Disease Control and Prevention who now runs Resolve to Save Lives, a nonprofit devoted to fighting epidemics.
The worried posts on LessWrong don’t start until weeks after this article was published. On a February 25, 2020 post asking when CFAR should cancel its in-person workshop, the top answer cites the CDC’s guidance at the time about covid-19. It says that CFAR’s workshops “should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced.” This is about 2-3 weeks out from that stuff happening. So, what exactly is being called early here?
CFAR is based in the San Francisco Bay Area, as are Lightcone Infrastructure and MIRI, two other organizations associated with the LessWrong community. On February 25, 2020, the city of San Francisco declared a state of emergency over covid. (Nearby Santa Clara County, where most of what people think of as Silicon Valley is located, declared a local health emergency on February 10.) At this point in time, posts on LessWrong remain overall cautious and ambivalent.
By the time the posts on LessWrong get really, really worried, in the last few days of February and the first week of March, much of the rest of the world was reacting in the same way.
From February 14 to February 25, the S&P 500 dropped about 7.5%. Around this time, financial analysts and economists issued warnings about the global economy.
Between February 21 and February 27, Italy began its first lockdowns of areas where covid outbreaks had occurred.
On February 25, 2020, the CDC warned Americans of the possibility that “disruption to everyday life may be severe”. The CDC made this bracing statement:
It’s not so much a question of if this will happen anymore, but more really a question of when it will happen — and how many people in this country will have severe illness.
Another line from the CDC:
We are asking the American public to work with us to prepare with the expectation that this could be bad.
On February 26, Canada’s Health Minister advised Canadians to stockpile food and medication.
The most prominent LessWrong post from late February warning people to prepare for covid came a few days later, on February 28. So, on this comparison, LessWrong was actually slightly behind the curve. (Oddly, that post insinuates that nobody else is telling people to prepare for covid yet, and congratulates itself on being ahead of the curve.)
In the beginning of March, the number of LessWrong posts tagged with covid-19 explodes, and the tone gets much more alarmed. The rest of the world was responding similarly at this time. For example, on February 29, 2020, Washington State declared a state of emergency around covid. On March 4, Governor Gavin Newsom did the same in California. The governor of Hawaii declared an emergency the same day, and over the next few days, many more states piled on.
Around the same time, the general public was becoming alarmed about covid. In the last days of February and the first days of March, many people stockpiled food and supplies. On February 29, 2020, PBS ran an article describing an example of this at a Costco in Oregon:
Worried shoppers thronged a Costco box store near Lake Oswego, emptying shelves of items including toilet paper, paper towels, bottled water, frozen berries and black beans.
“Toilet paper is golden in an apocalypse,” one Costco employee said.
Employees said the store ran out of toilet paper for the first time in its history and that it was the busiest they had ever seen, including during Christmas Eve.
A March 1, 2020 article in the Los Angeles Times reported on stores in California running out of product as shoppers stockpiled. On March 2, an article in Newsweek described the same happening in Seattle:
Speaking to Newsweek, a resident of Seattle, Jessica Seu, said: “It’s like Armageddon here. It’s a bit crazy here. All the stores are out of sanitizers and [disinfectant] wipes and alcohol solution. Costco is out of toilet paper and paper towels. Schools are sending emails about possible closures if things get worse.”
In Canada, the public was responding the same way. Global News reported on March 3, 2020 that a Costco in Ontario ran out of bottled water, toilet paper, and paper towels, and that the situation was similar at other stores around the country. The spike in worried posts on LessWrong coincides with the wider public’s reaction. (If anything, the posts on LessWrong are very slightly behind the news articles about stores being picked clean by shoppers stockpiling.)
On March 5, 2020, the cruise ship the Grand Princess made the news because it was stranded off the coast of California due to a covid outbreak on board. I remember this as being one seminal moment of awareness around covid. It was a big story. At this point, LessWrong posts are definitely in no way ahead of the curve, since everyone is talking about covid now.
On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a national emergency on March 13. The same day, 15 more U.S. states closed their schools. Also on the same day, Canada’s Parliament shut down because of the pandemic. By now, everyone knows it’s a crisis.
So, did LessWrong call covid early? I see no evidence of that. The timeline of LessWrong posts about covid follows the same timeline as the world at large’s reaction to covid, increasing in alarm as journalists, experts, and governments increasingly rang the alarm bells. In some comparisons, LessWrong’s response was a little bit behind.
The only curated post from this period (and the post with the third-highest karma, one of only four posts with over 100 karma) tells LessWrong users to prepare for covid three days after the CDC told Americans to prepare, and two days after Canada’s Health Minister told Canadians to stockpile food and medication. It was also three days after San Francisco declared a state of emergency. When that post was published, many people were already stockpiling supplies, partly because government health officials had told them to. (The LessWrong post was originally published on a blog a day before, and based on a note in the text apparently written the day before that, but that still puts the writing of the post a day after the CDC warning and the San Francisco declaration of a state of emergency.)
Unless there is some evidence that I didn’t turn up, it seems pretty clear the self-congratulatory narrative is a myth. The self-congratulation actually started in that post published on February 28, 2020, which, again, is odd given the CDC’s warning three days before (on the same day that San Francisco declared a state of emergency), analysts’ and economists’ warnings about the global economy a bit before that, and the New York Times article warning about a probable pandemic at the beginning of the month. The post is slightly behind the curve, but it’s gloating as if it’s way ahead.
Looking at the overall LessWrong post history in early 2020, LessWrong seems to have been, if anything, slightly behind the New York Times, the S&P 500, the CDC, and enough members of the general public to clear out some stores of certain products. By the time LessWrong posting reached a frenzy in the first week of March, the world was already responding — U.S. governors were declaring states of emergency, and everyone was talking about and worrying about covid.
I think people should be skeptical and even distrustful toward the claims of the LessWrong community, both on topics like pandemics and about its own track record and mythology. Obviously this myth is self-serving, and it was pretty easy for me to disprove in a short amount of time — so anyone who is curious can check and see that it’s not true. The people in the LessWrong community who believe the community called covid early probably believe that because it’s flattering. If they actually wondered if this is true or not and checked the timelines, it would become pretty clear that didn’t actually happen.
Edited to add on Monday, December 15, 2025 at 3:20pm Eastern:
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I’ll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances.
Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
YARROW: Boy, one would have to be a complete moron to think that COVID-19 would not be a big deal as late as Feb 28 2020, i.e. something that would imminently upend life-as-usual. At this point, China had locked down long ago, and even Italy had started locking down. Cases in the USA were going up and up, especially when you correct for the (tiny) amount of testing they were doing. The prepper community had certainly noticed, and was out in force buying out masks and such. Many public health authorities were also sounding alarms. What kind of complete moron would not see what’s happening here? Why is lesswrong patting themselves on the back for noticing something so glaringly obvious?
MY REPLY: Yes!! Yes, this is true!! Yes, you would have to be a complete moron to not make this inference!! …But man, by that definition, there sure were an awful lot of complete morons around, i.e. most everyone. LessWrong deserves credit for rising WAY above the incredibly dismal standards set by the public-at-large in the English-speaking world, even if they didn’t particularly surpass the higher standards of many virologists, preppers, etc.
My personal experience: As someone living in normie society in Massachusetts USA but reading lesswrong and related, I was crystal clear that everything about my life was about to wrenchingly change, weeks before any of my friends or coworkers were. And they were very weirded out by my insistence on this. Some were in outright denial (e.g. “COVID = anti-Chinese racism” was a very popular take well into February, maybe even into March, and certainly the “flu kills far more than COVID” take was widespread in early March, e.g. Anderson Cooper). Others were just thinking about things in far-mode; COVID was a thing that people argued about in the news, not a real-world thing that could or should affect one’s actual day-to-day life and decisions. “They can’t possibly shut down schools, that’s crazy”, a close family member told me days before they did.
Dominic Cummings cited seeing the smoke as being very influential in jolting him to action (and thus impacting UK COVID policy), see screenshot here, which implies that this essay said something that he (and others at the tip-top of the UK gov’t) did not already see as obvious at the time.
A funny example that sticks in my memory is a tweet by Eliezer from March 11 2020. Trump had just tweeted:
So last year 37,000 Americans died from the common Flu. It averages between 27,000 and 70,000 per year. Nothing is shut down, life & the economy go on. At this moment there are 546 confirmed cases of CoronaVirus, with 22 deaths. Think about that!
Eliezer quote-tweeted that, with the commentary:
9/11 happens, and nobody puts that number into the context of car crash deaths before turning the US into a security state and invading Iraq. Nobody contextualizes school shootings. But the ONE goddamn time the disaster is a straight line on a log chart, THAT’S when… [quote-tweet Trump]
We’re in Nerd Hell, lads and ladies and others. We’re in a universe that was specifically designed to maximally annoy numerate people. This is like watching a stopped clock, waiting for it to be right, and just as the clock almost actually is right, the clock hands fall off.
YARROW: Boy, one would have to be a complete moron to think that COVID-19 would not be a big deal as late as Feb 28 2020, i.e. something that would imminently upend life-as-usual. … What kind of complete moron would not see what’s happening here? Why is lesswrong patting themselves on the back for noticing something so glaringly obvious?
Not at all accurate. That’s not what I’m saying at all. It was a situation of high uncertainty, and the appropriate response was to be at least somewhat unsure, if not very unsure — yes, take precautions, think about it, learn about it, follow the public health advice. But I don’t think on February 28 anyone knew for sure what would happen, as opposed to made an uncertain call that turned out to be correct. The February 28 post I cite gives that sort of uncertain, precautionary advice, and I think it’s more or less reasonable advice — just a general ‘do some research, be prepared’ sort of thing.
It’s just that the post goes so far in patting itself on the back for being way ahead on this, when if someone in the LessWrong community had just posted about the CDC’s warning on the same day it was issued, or had posted about it when San Francisco declared a public health emergency, or had made a post noting that the S&P 500 had just fallen 7.5% and that maybe that was a reason to be concerned, that would have put the first urgent warning about the pandemic a few days ahead of the February 28 post.
The takeaway of that post, and the takeaway of people who congratulate the LessWrong community on calling covid early, is that this is evidence that reading Yudkowsky’s Sequences or LessWrong posts or whatever promotes superior rationality, and is a vindication of the community’s beliefs. But that is the wrong conclusion to draw if something like 10-80% of the overall North American population (these figures are loosely based on polling cited in another comment) was at least equally concerned about covid-19 at least as early. 99.999% of the millions of people who were at least as concerned at least as early as the LessWrong community haven’t read the Sequences and don’t know what LessWrong is. A strategy that would have worked better than reading the Sequences or LessWrong posts is: just listen to what the CDC is saying and what state and local public health authorities are saying.
It’s ridiculous to draw the conclusion that this is a vindication of LessWrong’s approach.
Dominic Cummings cited seeing the smoke as being very influential in jolting him to action (and thus impacting UK COVID policy), see screenshot here.
I don’t see this as a recommendation for LessWrong, although it sure is an interesting historical footnote. Dominic Cummings doesn’t appear to be a credible person on covid-19. For example, in November 2024 he posted a long, conspiratorial tweet which included:
“The Fauci network should be rolled up & retired en masse with some JAILED. And their media supporters—i.e most of the old media—driven out of business.”
The core problem there is not that he hasn’t read LessWrong enough. (Indeed, reading LessWrong might make a person more likely to believe such things, if anything.)
Incidentally, Cummings also had a scandal in the UK around allegations that he inappropriately violated the covid-19 lockdown and subsequently wasn’t honest about it.
My personal experience: As someone living in normie society in Massachusetts USA but reading lesswrong and related, I was crystal clear that everything about my life was about to wrenchingly change, weeks before any of my friends or coworkers were. And they were very weirded out by my insistence on this.
Tens of millions if not hundreds of millions of people in North America had experiences similar to this. The level of alarm spread across the population gradually from around mid-January to mid-March 2020, so at any given time, there were a large number of people who were much more concerned than another large number of people.
I tried to convince my friends to take covid more seriously a few days before the WHO proclamation, the U.S. state of emergency declaration, and all the rest made it evident to them that it was time to worry. I don’t think I’m a genius for this — in fact, they were probably right to wait for more convincing evidence. If we were to re-run the experiment 10 times or 100 times, their approach might prove superior to mine. I don’t know.
A funny example that sticks in my memory is a tweet by Eliezer from March 11 2020. Trump had just tweeted:
This is ridiculous. Do you think these sort of snipes are at all unique to Eliezer Yudkowsky? Turn on Rachel Maddow or listen to Pod Save America, or follow any number of educated liberals (especially those with relevant expertise or journalists who cover science and medicine) on Twitter and you would see this kind of stuff all the time. It’s not an insight unique to Yudkowsky that Donald Trump says ridiculous and dangerous things about covid or many other topics.
The version of the claim I have heard is not that LW was early to suggest that there might be a pandemic but rather that they were unusually willing to do something about it because they take small-probability high-impact events seriously. Eg. I suspect that you would say that Wei Dai was “late” because their comment came after the nyt article etc, but nonetheless they made 700% betting that covid would be a big deal.
I think it can be hard to remember just how much controversy there was at the time. E.g. you say of March 13, “By now, everyone knows it’s a crisis” but sadly “everyone” did not include the California department of public health, who didn’t issue stay at home orders for another week.
[I have a distinct memory of this because I told my girlfriend I couldn’t see her anymore since she worked at the department of public health (!!) and was still getting a ton of exposure since the California public health department didn’t think covid was that big of a deal.]
The version of the claim I have heard is not that LW was early to suggest that there might be a pandemic but rather that they were unusually willing to do something about it because they take small-probability high-impact events seriously. Eg. I suspect that you would say that Wei Dai was “late” because their comment came after the nyt article etc, but nonetheless they made 700% betting that covid would be a big deal.
Is there any better evidence of this than the example you cited? That comment from Wei Dai is completely ridiculous… Making a lot of money off of a risky options trade does not discredit the efficient market hypothesis.
Even assuming Wei Dai’s math is right (which I don’t automatically trust), the market guessing on February 10, 2020 that there was a 12% chance (or whatever it is) that the covid-19 situation would be as bad as the market thought it was on February 27, 2020 doesn’t seem ridiculous or crazy or a discrediting of the efficient market hypothesis.
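To spell out where a number like 12% comes from (my own back-of-the-envelope illustration, not a reconstruction of the actual trade): a roughly 700% return means the gross payoff was about 8 times the stake, and under a simplified two-outcome model, a fairly priced bet of that shape implies the market put roughly a one-in-eight chance on that outcome. With stake $x$ and gross payoff $8x$:

$$p \cdot 8x \approx x \;\Longrightarrow\; p \approx \tfrac{1}{8} \approx 12.5\%$$

If the market had thought the bad outcome was much more likely than that, the option would not have been priced cheaply enough to return 700%.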
(Also note the bias of people only posting about the option trades they do well on, afterward, in retrospect...)
I think it can be hard to remember just how much controversy there was at the time. E.g. you say of March 13, “By now, everyone knows it’s a crisis” but sadly “everyone” did not include the California public health department, who didn’t issue stay at home orders for another week.
By what date did the majority of LessWrong users start staying at home?
I doubt that there are surveys of when people stayed home. You could maybe try to look at prediction markets but I’m not sure what you would compare them to to see if the prediction market was more accurate than some other reference group.
I think the COVID case usefully illustrates a broader issue with how “EA/rationalist prediction success” narratives are often deployed.
That said, this is exactly why I’d like to see similar audits applied to other domains where prediction success is often asserted, but rarely with much nuance. In particular: crypto, prediction markets, LVT, and more recently GPT-3 / scaling-based AI progress. I wasn’t closely following these discussions at the time, so I’m genuinely uncertain about (i) what was actually claimed ex ante, (ii) how specific those claims were, and (iii) how distinctive they were relative to non-EA communities.
This matters to me for two reasons.
First, many of these claims are invoked rhetorically rather than analytically. “EAs predicted X” is often treated as a unitary credential, when in reality predictive success varies a lot by domain, level of abstraction, and comparison class. Without disaggregation, it’s hard to tell whether we’re looking at genuine epistemic advantage, selective memory, or post-hoc narrative construction.
Second, these track-record arguments are sometimes used—explicitly or implicitly—to bolster the case for concern about AI risks. If the evidential support here rests on past forecasting success, then the strength of that support depends on how well those earlier cases actually hold up under scrutiny. If the success was mostly at the level of identifying broad structural risks (e.g. incentives, tail risks, coordination failures), that’s a very different kind of evidence than being right about timelines, concrete outcomes, or specific mechanisms.
I like this comment. This topic is always at risk of devolving into a generalized debate between rationalists and their opponents, creating a lot of heat but little light. So it’s helpful to keep a fairly tight focus on potentially action-relevant questions (of which the comment identifies one).
I’ve been around EA pretty deeply since 2015, and to some degree since around 2009. My impression is that overall it’s what you guessed it might be: “selective memory, or post-hoc narrative construction.” Particularly around AI, but also in general with such claims.
(There’s a good reason to make specific, dated predictions publicly, in advance, ideally with some clear resolution criteria.)
I don’t exactly trust you to do this in an unbiased way, but this comment seems like the state of the art, and I love retrospectives on COVID-19. Plausibly I should look into how well your story checks out, plus how EA itself, the relevant parts of Twitter, and prediction platforms like Metaculus (which I felt was definitely ahead) compared at the time.
I wrote this comment on Jan 27, indicating that it wasn’t just a few people who were worried at the time. I think most “normal” people weren’t tracking covid in January.
I think the thing to realize, and that people easily forget, is that everything was really confusing and there was just a ton of contentious debate during the early months. So while there was apparently a fairly alarmed NYT report in early February, there were also many other reports in February that were less alarmed, many bad forecasts, etc.
It would be easy to find a few examples like this from any large sample of people. As I mentioned in the quick take, in late January, people were clearing out stores of surgical masks in cities like New York.
My overall objection/argument is that you appear to selectively portray data points that show one side, and selectively dismiss data points that show the opposite view. This makes your bottom-line conclusion pretty suspicious.
I also think the rationalist community overreached and their epistemics and speed in early COVID were worse compared to, say, internet people, government officials, and perhaps even the general public in Taiwan. But I don’t think the case for them being slower than Western officials or the general public in either the US or Europe is credible, and your evidence here does not update me much.
It’s clear that in late January 2020, many people in North America were at least moderately concerned about covid-19.
I already gave the example of some stores in a few cities selling out of face masks. That’s anecdotal, but a sign of enough fear among enough people to be noteworthy.
What about the U.S. government’s reaction? The CDC issued a warning about travelling to China on January 28, and on January 31 the U.S. federal government declared a public health emergency, implemented a mandatory 14-day quarantine for travelers returning from China, and implemented other travel restrictions. Both the CDC warning and the travel restrictions were covered in the press, so many people knew about them, but even before that happened, a lot of people said they were worried.
Here’s a Morning Consult poll from January 24-26, 2020:
An Ipsos poll of Canadians from January 27-28 found similar results:
Half (49%) of Canadians think the coronavirus poses a threat (17% very high/32% high) to the world today, while three in ten (30%) think it poses a threat (9% very high/21% high) to Canada. Fewer still think the coronavirus is a threat to their province (24%) or to themselves and their family (16%).
Were significantly more than 37% of LessWrong users very concerned about covid-19 around this time? Did significantly more than 16% think covid-19 posed a threat to themselves and their family?
It’s hard to make direct, apples-to-apples comparisons between the general public and the LessWrong community. We don’t have polls of the LessWrong community to compare to. But those examples you gave from January 24-January 27, 2020 don’t seem different from what we’d expect if the LessWrong community was at about the same level of concern at about the same time as the general public. Even if the examples you gave represented the worries of ~15-40% of the LessWrong community, that wouldn’t be evidence that LessWrong users were doing better than average.
I’m not claiming that the LessWrong community was clearly significantly behind. If it was behind at all, it was only by a few days or maybe a week tops (not much in the grand scheme of things), and the evidence isn’t clear or rigorous enough to definitively draw a conclusion like that. My claim is just that the LessWrong community’s claim to have called the pandemic early is pretty clearly false, or at the very least completely unsupported so far.
I recommend looking at the Morning Consult PDF and checking the different variations of the question to get a fuller picture. People also gave surprisingly high answers for other viruses like Ebola and Zika, but not nearly as high as for covid.
If you want a source who is biased in the opposite direction and who generally agrees with my conclusion, take a look here and here. I like this bon mot:
In some ways, I think this post isn’t “seeing the smoke,” so much as “seeing the fire and choking on the smoke.”
This is their conclusion from the second link:
If the sheer volume of conversation is our alarm bell, this site seems to have lagged behind the stock market by about a week.
Unless there is some evidence that I didn’t turn up, it seems pretty clear the self-congratulatory narrative is a myth.
This is a cool write-up! I’m curious how much (if at all) you took Zvi’s COVID round-ups into account. I wasn’t around LessWrong during COVID, but, if I understand correctly, those played a large role in the information flow during that time.
The first post listed there is from March 2, 2020, so that’s relatively late in the timeline we’re considering, no? That’s 3 days later than the February 28 post I discussed above as the first/best candidate for a truly urgent early warning about covid-19 on LessWrong. (2020 was a leap year, so there was a February 29.)
That first post from March 2 also seems fairly simple and not particularly different from the February 28 post (which it cites).
Following up a bit on this, @parconley. The second post in Zvi’s covid-19 series is from 6pm Eastern on March 13, 2020. Let’s remember where this is in the timeline. From my quick take above:
On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a national emergency on March 13. The same day, 15 more U.S. states closed their schools. Also on the same day, Canada’s Parliament shut down because of the pandemic.
Zvi’s post from March 13, 2020 at 6pm is about all the school closures that happened that day. (The U.S. state of emergency was declared that morning.) It doesn’t make any specific claims or predictions about the spread of the novel coronavirus, or anything else that could be assessed in terms of its prescience. It mostly focuses on the topic of the social functions that schools play (particularly in the United States and in the state of New York specifically) other than teaching children, such as providing free meals and supervision.
This is too late into the timeline to count as calling the pandemic early, and the post doesn’t make any predictions anyway.
The third post from Zvi is from March 17, 2020, and it’s mostly a personal blog post. There are a few relevant bits. For one, Zvi admits he was surprised at how bad the pandemic was at that point:
Regret I didn’t sell everything and go short, not because I had some crazy belief in efficient markets, but because I didn’t expect it to be this bad and I told myself a few years ago I was going to not be a trader anymore and just buy and hold.
He argues New York City is not locking down soon enough and San Francisco is not locking down completely enough. About San Francisco, one thing he says is:
Local responses much better. Still inadequate. San Francisco on strangely incomplete lock-down. Going on walks considered fine for some reason, very strange.
I don’t know how sound this was given what experts knew at the time. It might have been the right call. I don’t know. I will just say that, in retrospect, it seems like going outside was one of the things we originally thought wasn’t fine that we later thought was actually fine after all.
The next post after that isn’t until April 1, 2020. It’s about the viral load of covid-19 infections and the question of how much viral load matters. By this point, we’re getting into questions about the unfolding of the ongoing pandemic, rather than questions about predicting the pandemic in advance. You could potentially go and assess that prediction track record separately, but that’s beyond the scope of my quick take, which was to assess whether LessWrong called covid early.
Overall, Zvi’s posts, at least the ones included in this series, are not evidence for Zvi or LessWrong calling covid early. The posts start too late and don’t make any predictions. Zvi saying “I didn’t expect it to be this bad” is actually evidence against Zvi calling covid early. So, I think we can close the book on this one.
Still open to hearing other things people might think of as evidence that the LessWrong community called covid early.
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I’ll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances.
Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
Since my days of reading William Easterly’s Aid Watch blog back in the late 2000s and early 2010s, I’ve always thought it was a matter of both justice and efficacy to have people from globally poor countries in leadership positions at organizations working on global poverty. All else being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from Canada with an equal level of education, an equal ability to network with the right international organizations, etc.
In practice, this is probably hard to do, since it requires crossing language barriers, cultural barriers, geographical distance, and international borders. But I think it’s worth it.
So much of what effective altruism does around global poverty, even the most evidence-based and quantitative work, relies on people’s intuitions. And intuitions formed from living in wealthy, Western countries, with no connection to or experience of a globally poor country, are going to be less accurate than those of people who have lived in poor countries and know a lot about them.
Simply put, first-hand experience of poor countries is a form of expertise and organizations run by people with that expertise are probably going to be a lot more competent at helping globally poor people than ones that aren’t.
I agree with most of what you say here; indeed, all things being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from anywhere else. The problem is your caveats: things are almost never equal...
1) Education systems just aren’t nearly as good in lower-income countries. This means that education is sadly barely ever equal. Even between low-income countries — a Kenyan once joked with me that “a Ugandan degree holder is like a Kenyan high school leaver”. If you look at the top echelon of NGO/charity leaders from low-income countries whose charities have grown and scaled big, most have been at least partially educated in richer countries.
2) Ability to network is sadly usually so, so much higher if you’re from a higher-income country. Social capital is real and insanely important. If you look at the very biggest NGOs, most of them were founded not just by Westerners, but by IVY LEAGUE OR OXBRIDGE EDUCATED WESTERNERS: Paul Farmer (Partners in Health) from Harvard, Raj Panjabi (Last Mile Health) from Harvard, Paul Niehaus (GiveDirectly) from Harvard, Rob Mathers (the Against Malaria Foundation) from Harvard AND Cambridge. With those connections you can turn a good idea into growth so much faster, even compared to super-privileged people like me from New Zealand, let alone people with amazing ideas and organisations in low-income countries who just don’t have access to that kind of social capital.
3) The pressures on people from low-income countries to secure their futures are so high that their own financial security will often come first, and the vast majority won’t stay the course with their charity but will leave when they get an opportunity to further their career. And fair enough, too! I’ve seen a number of incredibly talented founders here in Northern Uganda drop their charity for a high-paying USAID job (that ended poorly...), or an overseas study scholarship, or a solid government job. Here’s a telling quote from a great take by @WillieG:
“Roughly a decade ago, I spent a year in a developing country working on a project to promote human rights. We had a rotating team of about a dozen (mostly) brilliant local employees, all college-educated, working alongside us. We invested a lot of time and money into training these employees, with the expectation that they (as members of the college-educated elite) would help lead human rights reform in the country long after our project disbanded. I got nostalgic and looked up my old colleagues recently. Every single one is living in the West now. A few are still somewhat involved in human rights, but most are notably under-employed (a lawyer washing dishes in a restaurant in Virginia, for example).”
I think (somewhat sadly) a good combination can be for co-founders or co-leaders to be one person from a high-income country with more funding/research connections, and one local person who, like you say, will be far more effective at understanding the context and leading in locally appropriate ways. This synergy can cover important bases, and you’ll see a huge number of charities (including mine) founded along these lines.
These realities make me uncomfortable, though, and I wish it weren’t so. As @Jeff Kaufman 🔸 said, “I can’t reject my privilege, I can’t give it back”, so I try to use my privilege as best I can to help lift up the poorest people. The organisation I co-founded, OneDay Health, has me as its only foreign employee, alongside 65 local staff.
There are two philosophies on what the key to life is.
The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.
The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, show compassion and empathy to more people. But this is constantly uncomfortable because, again and again, you have to face the wretched masses of humanity and say “me too, me too, me too” (and realize you are one of them).
I am a total believer in the second philosophy and a hater of the first philosophy. (Not because it’s easy, but because it’s right!) To the extent I care about effective altruism, it’s because of the second philosophy: expand the moral circle, value all lives equally, extend beyond national borders, consider non-human creatures.
When I see people in effective altruism evince the first philosophy, to me, this is a profane betrayal of the whole point of the movement.
One of the reasons (among several other important reasons) that rationalists piss me off so much is that their whole worldview and subculture is based on the first philosophy. Even the word “rationalist” is about being superior to other people. If the rationalist community has one founder or leader, it would be Eliezer Yudkowsky. The way Eliezer Yudkowsky talks to and about other people, even people who are actively trying to help him or to understand him, is so hateful and so mean. He exhales contempt. And it isn’t just Eliezer — you can go on LessWrong and read horrifying accounts of how some prominent people in the community have treated an employee or a romantic partner, with the stated justification that they are separate from and superior to others. Obviously there’s a huge problem with racism, sexism, and anti-LGBT prejudice too, which are other ways of feeling separate and above.
There is no happiness to be found at the top of a hierarchy. Look at the people who think in the most hierarchical terms, who have climbed to the tops of the hierarchies they value. Are they happy? No. They’re miserable. This is a game you can’t win. It’s a con. It’s a lie.
In the beautiful words of the Franciscan friar Richard Rohr, “The great and merciful surprise is that we come to God not by doing it right but by doing it wrong!”
(Richard Rohr’s episode of You Made It Weird with Pete Holmes is wonderful if you want to hear more.)
I just want to point out that I have a degree in philosophy and have never heard the word “epistemics” used in the context of academic philosophy. The word used has always been either epistemology or epistemic as adjective in front of a noun (never on its own, always used as an adjective, not a noun, and certainly never pluralized).
From what I can tell, “epistemics” seems to be weird EA Forum/LessWrong jargon. Not sure how or why this came about, since this is not obscure philosophy knowledge, nor is it hard to look up.
If you Google “epistemics” philosophy, you get 1) sources like Wikipedia that talk about epistemology, not “epistemics”, 2) a post from the EA Forum and a page from the Forethought Foundation, which is an effective altruist organization, 3) some unrelated, miscellaneous stuff (i.e. neither EA-related nor academic-philosophy-related), and 4) a few genuine but fairly obscure uses of the word “epistemics” in an academic philosophy context. This confirms that the term is rarely used in academic philosophy.
I also don’t know what people in EA mean when they say “epistemics”. I think they probably mean something like epistemic practices, but I actually don’t know for sure.
I would discourage the use of the term “epistemics”, particularly as its meaning is unclear, and would advocate for a replacement such as epistemology or epistemic practices (or whatever you like, but not “epistemics”).
I agree this is just a unique rationalist use. Same with ‘agentic’ though that has possibly crossed over into the more mainstream, at least in tech-y discourse.
However I think this is often fine, especially because ‘epistemics’ sounds better than ‘epistemic practices’ and means something distinct from ‘epistemology’ (the study of knowledge).
Always good to be aware you are using jargon though!
There’s no accounting for taste, but ‘epistemics’ sounds worse to my ear than ‘epistemic practices’ because the clunky jargoniness of ‘epistemics’ is just so evident. It’s as if people said ‘democratics’ instead of ‘democracy’, or ‘biologics’ instead of ‘biology’.
I also don’t know for sure what ‘epistemics’ means. I’m just inferring that from its use and assuming it means ‘epistemic practices’, or something close to that.
‘Epistemology’ is unfortunately a bit ambiguous and primarily connotes the subfield of philosophy rather than anything you do in practice, but I think it would also be an acceptable and standard use to talk about ‘epistemology’ as what one does in practice, e.g., ‘scientific epistemology’ or ‘EA epistemology’. It’s a bit similar to ‘ethics’ in this regard, which is both an abstract field of study and something one does in practice, although the default interpretation of ‘epistemology’ is the field, not the practice, and for ‘ethics’ it’s the reverse.
It’s neither here nor there, but I think talking about personal ‘agency’ (terminology that goes back decades, long predating the rationalist community) is far more elegant than talking about a person being ‘agentic’. (For AI agents, it doesn’t matter.)
I find “epistemics” neat because it is shorter than “applied epistemology” and reminds me of “athletics”, with the resulting (implied) focus on practice. I don’t think anyone ever explained what “epistemics” refers to, and I thought it was pretty self-explanatory from the similarity to “athletics”.
I also disagree with the general notion that jargon specific to a community is necessarily bad, especially if that jargon has fewer syllables. Most subcultures, engineering disciplines, and sciences invent words or abbreviations for more efficient communication, and while some of that may be due to trying to gatekeep, it’s so universal that I’d be surprised if it doesn’t carry value. There can be better and worse coinages of new terms, and three/four/five-letter abbreviations such as “TAI” or “PASTA” or “FLOP” or “ASARA” are worse than words like “epistemics” or “agentic”.
I guess ethics makes the distinction between normative ethics and applied ethics. My understanding is that epistemology is not about practical techniques, and that one can make a distinction here (just like the distinction between “methodology” and “methods”).
I tried to figure out if there’s a pair of suffixes that express the difference between the theoretical study of some field and the applied version. Claude suggests “-ology”/“-urgy” (as in metallurgy, dramaturgy) and “-ology”/“-iatry” (as in psychology/psychiatry), but notes that no general such pattern exists.
Applied ethics is still ethical theory, it’s just that applied ethics is about specific ethical topics, e.g. vegetarianism, whereas normative ethics is about systems of ethics, e.g. utilitarianism. If you wanted to distinguish theory from practice and be absolutely clear, you’d have to say something like ethical practices.
I prefer to say epistemic practices rather than epistemics (which I dislike) or epistemology (which I like, but is more ambiguous).
I don’t think the analogy between epistemics and athletics is obvious, and I would be surprised if even 1% of the people who have ever used the term epistemics have made that connection before.
I am very wary of terms that are never defined or explained. It is easy for people to assume they know what they mean, that there’s a shared meaning everyone agrees on. I really don’t know what epistemics means and I’m only assuming it means epistemic practices.
I fear there’s a realistic chance that, if I started asking different people to define epistemics, we would quickly discover that different people have different and incompatible definitions. For example, some people might think of it as epistemic practices and some people might think of it as epistemological theory.
I am more anti-jargon and anti-acronyms than a lot of people. Really common acronyms, like AI or LGBT, or acronyms where the acronym is far better known than the spelled-out version, like NASA or DVD, are, of course, absolutely fine. PASTA and ASARA are egregious.
I’m such an anti-acronym fanatic I even spell out artificial general intelligence (AGI) and large language model (LLM) whenever I use them for the first time in a post.
My biggest problem with jargon is that nobody knows what it means. The in-group who is supposed to know what it means also doesn’t know what it means. They think they do, but they’re just fooling themselves. Ask them probing questions, and they’ll start to disagree and fight about the definition. This isn’t always true, but it’s true often enough to make me suspicious of jargon.
Jargon can be useful, but it should be defined, and you should give examples of it. If a common word or phrase exists that is equally good or better, then you should use that instead. For example, James Herbert recently made the brilliant comment that instead of “truthseeking” — an inscrutable term that, for all I know, would turn out to have no definite meaning if I took the effort to try to get multiple people to try to define it — an older term used on effectivealtruism.org was “a scientific mindset”, which is nearly self-explanatory. Science is a well-known and well-defined concept. Truthseeking — whatever that means — is not.
This isn’t just true for a subculture like the effective altruist community, it’s also true for a field like academic philosophy (maybe philosophy is unique in this regard among academic fields). You wouldn’t believe the number of times people disagree about the basic meaning of terms. (For example, do sentience and consciousness mean the same thing, or two different things? What about autonomy and freedom?) This has made me so suspicious that shared jargon actually isn’t understood in the same way by the people who are using it.
Just avoiding jargon isn’t the whole trick (for one, it’s often impossible or undesirable), it’s got to be a multi-pronged approach.
You’ve really got to give examples of things. Examples are probably more important than definitions. Think about when you’re trying to learn a card game, a board game, or a parlour game (like charades). The instructions can be very precise and accurate, but reading the instructions out loud often makes half the table go googly-eyed and start shaking their heads. If the instructions contain even one example, or if you can watch one round of play, that’s so much more useful than a precise “definition” of the game. Examples, examples, examples.
Also, just say it simpler. Speak plainly. Instead of ASARA, why not “AI doing AI”? Instead of PASTA, why not “AI scientists and engineers”? It’s so much cleaner, and simpler, and to the point.
People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble.
Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031).
For those who know more about forecasting than me, and especially for those who can think of good ways to financially operationalize such a prediction, I would encourage you to make a post about this.
[Edited on Nov. 17, 2025 at 3:35 PM Eastern to add: I wrote a full-fledged post about the AI bubble that can prompt a richer discussion. It doesn’t attempt to operationalize the bubble question, but gets into the expert opinions and evidence. I also do my own analysis.]
What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
My leading view is that there will be some sort of bubble pop, but with people still using generative AI tools to some degree afterwards (like how people kept using the internet after the dot-com bubble burst).
Still major uncertainty on my part because I don’t know much about financial markets, and am still highly uncertain about the level where AI progress fully stalls.
I just realized the way this poll is set up is really confusing. You’re currently at “50% 100% probability”, which when you look at it on the number line looks like 75%. Not the best tool to use for such a poll, I guess!
I don’t know exactly how you’d operationalize an AI bubble. If OpenAI were a public company, you could say its stock price goes down a certain amount. But private companies can control their own valuation (or the public perception of it) to a certain extent, e.g. by not raising more money so their last known valuation is still from their most recent funding round.
Many public companies like Microsoft, Google, and Nvidia are involved in the AI investment boom, so their stocks can be taken into consideration. You can also look at the level of investment and data centre construction.
I don’t think it would be that hard to come up with reasonable resolution criteria, it’s just that this is of course always a nitpicky thing with forecasting and I haven’t spent any time on it yet.
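For illustration, here’s one hypothetical resolution rule. The tickers, the 40% threshold, and the 90-day window are placeholders I made up for the sketch, not a serious proposal:

```python
# Hypothetical resolution rule for "the AI bubble popped": a basket of
# AI-exposed public stocks suffers a large, sustained drawdown before the
# deadline. All the specifics below (tickers, 40% threshold, 90-day window)
# are illustrative assumptions, not anything agreed on in this thread.

from datetime import date

AI_BASKET = ["MSFT", "GOOGL", "NVDA"]  # AI-exposed public companies
DRAWDOWN_THRESHOLD = 0.40              # basket falls 40% from its peak...
PERSISTENCE_DAYS = 90                  # ...and stays there for 90 straight days
DEADLINE = date(2031, 1, 1)

def bubble_popped(peak_value: float, daily_values: list[float]) -> bool:
    """True if the basket stayed at least 40% below its peak for 90+ consecutive days."""
    run = 0
    for value in daily_values:
        if value <= peak_value * (1 - DRAWDOWN_THRESHOLD):
            run += 1
            if run >= PERSISTENCE_DAYS:
                return True
        else:
            run = 0
    return False
```

A rule like this only covers the public companies; the private valuations (OpenAI, Anthropic) would still need separate criteria, which is the harder part.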
What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
I’m not exactly sure about the operationalization of this question, but it seems like there’s a bubble among small AI startups at the very least. The big players might be unaffected however? My evidence for this is some mix of not seeing a revenue pathway for a lot of these companies that wouldn’t require a major pivot, few barriers to entry for larger players if their product becomes successful, and having met a few people who work in AI startups who claim to be optimistic about earnings and stuff but can’t really back that up.
I don’t know much about small AI startups. The bigger AI companies have a problem because their valuations have increased so much, and the investments they’re making (e.g. in building data centres) are reaching levels that feel unsustainable.
It’s to the point where the AI investment, driven primarily by the large AI companies, has significant macroeconomic effects on the United States economy. The popping of an AI bubble could be followed by a U.S. recession.
However, it’s a bit complicated, in that case, as to whether to say the popping of the bubble would have “caused” the recession, since there are a lot of factors, such as tariffs. Macroeconomics and financial markets are complicated and I know very little. I’m not nearly an expert.
I don’t think small AI startups creating successful products and then large AI companies copying them and outcompeting them would count as a bubble. That sounds like the total of amount of revenue in the industry would be about the same as if the startups succeeded, it just would flow to the bigger companies instead.
The bubble question is about the industry as a whole.
I do think there’s also a significant chance of a larger bubble, to be fair, affecting the big AI companies. But my instinct is that a sudden fall in investment into small startups and many of them going bankrupt would get called a bubble in the media, and that that investment wouldn’t necessarily just go into the big companies.
What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
I put 30% on this possibility, maybe 35%. I don’t have much more to say than “time horizons!”, “look how useful they’re becoming in my day job and personal life!”, “look at the qualitative improvement over the last six years”, “we only need to automate machine learning research, which isn’t the hardest thing to automate”.
Worlds in which we get a bubble pop are worlds in which we don’t get a software intelligence explosion, and in which either useful products come too late for the investment to sustain itself or there just aren’t many more useful products beyond what we already have. (This is tied in with the question of whether we get TAI through the things LLMs enable us to do, without fundamental new insights.)
I haven’t done the sums myself, but do we know for sure that they can’t make money without being all that useful, so long as a lot of people interact with them every day?
Is Facebook “useful”? Not THAT much. Do people pay for it? No, it’s free. Instagram is even less useful than Facebook, which at least used to be genuinely good for organizing parties and pub nights. Does Meta make money? Yes. Does equally useless TikTok make money? I presume so, yes. I think tech companies are pretty expert at monetizing things that have no user fee and aren’t that helpful at work. There’s already a massive user base for ChatGPT etc. Maybe they can monetize it even without it being THAT useful. Or maybe the sums just don’t work out for that, I’m not sure. But clearly the market thinks they will make money in expectation. That’s a boring reason for rejecting “it’s a bubble” claims, and bubbles do happen, but I suspect beating the market at pricing shares genuinely is quite difficult.
Of course, there could also be a bubble even if SOME AI companies make a lot of money. That’s what happened with the dot-com bubble.
This is an important point to consider. OpenAI is indeed exploring how to put ads on ChatGPT.
My main source of skepticism about this is that the marginal revenue from an online ad is extremely low, but that’s fine because the marginal cost of serving a webpage or loading a photo in an app or whatever is also extremely low. I don’t have a good sense of the actual numbers here, but since a GPT-5 query is considerably more expensive than serving a webpage, this could be a problem. (Also, that’s just the marginal cost. OpenAI, like other companies, also has to amortize all its fixed costs over all its sales, whether they’re ad sales or sales directly to consumers.)
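To show the shape of the worry, here’s a back-of-envelope sketch. Every number in it is a made-up placeholder, not a real figure for OpenAI or anyone else:

```python
# Back-of-envelope comparison of ad revenue per query vs. inference cost per
# query. All numbers are placeholder assumptions chosen only to illustrate
# the structure of the comparison.

ad_revenue_per_query = 0.002     # assume a fraction of a cent of ad revenue per query
inference_cost_per_query = 0.01  # assume roughly a cent of compute per answer
fixed_costs_per_year = 5e9       # assume R&D, salaries, data centres, etc.
queries_per_year = 500e9         # assume hundreds of billions of queries

margin_per_query = ad_revenue_per_query - inference_cost_per_query
annual_margin = margin_per_query * queries_per_year - fixed_costs_per_year

print(f"Margin per query: ${margin_per_query:.4f}")
print(f"Annual margin: ${annual_margin / 1e9:.1f}B")
# With these (made-up) numbers, ads lose money on every query even before
# fixed costs are counted, which is exactly the concern described above.
# Whether the real numbers look like this is the open question.
```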
It’s been rumoured/reported (not sure which) that OpenAI is planning to get ChatGPT to sell things to you directly. So, if you ask, “Hey, ChatGPT, what is the healthiest type of soda?”, it will respond, “Why, a nice refreshing Coca‑Cola® Zero Sugar of course!” This seems horrible. That would probably drive some people off the platform, but, who knows, it might be a net financial gain.
There are other “useless” ways companies like OpenAI could try to drive usage and try to monetize either via ads or paid subscriptions. Maybe if OpenAI leaned heavily into the whole AI “boyfriends/girlfriends” thing that would somehow pay off — I’m skeptical, but we’ve got to consider all the possibilities here.
What do you make of the fact that METR’s time horizon graph and METR’s study on AI coding assistants point in opposite directions? The graph says: exponential progress! Superhuman coders! AGI soon! Singularity! The study says: overhyped product category, useless tool, tricks people into thinking it helps them when it actually hurts them.
Yep, I wouldn’t have predicted that. I guess the standard retort is: Worst case! Existing large codebase! Experienced developers!
I know that there are software tools I use more than once a week that wouldn’t have existed without AI models. They’re not very complicated, but they’d have been annoying to code up myself, and I wouldn’t have done it. I wonder if there’s still a slowdown in less harsh scenarios, but the value of information probably isn’t worth the cost of running such a study.
I dunno. I’ve done a bunch of calibration practice,[1] and this feels like a 30%, so I’m calling 30%. My probability went up recently, mostly because some subjectively judged capabilities that I was expecting didn’t start showing up.
My metaculus calibration around 30% isn’t great, I’m overconfident there, I’m trying to keep that in mind. My fatebook is slightly overconfident in that range, and who can tell with Manifold.
There’s a longer discussion of that oft-discussed METR time horizons graph that warrants a post of its own.
My problem with how people interpret the graph is that people slip quickly and wordlessly from step to step in a logical chain of inferences that I don’t think can be justified. The chain of inferences is something like:
AI model performance on a set of very limited benchmark tasks → AI model performance on software engineering in general → AI model performance on everything humans do
What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
I haven’t thought about my exact probability too hard yet, but for now I’ll just say 90% because that feels about right.
I’m seeking second opinions on whether my contention in Edit #4 at the bottom of this post is correct or incorrect. See the edit at the bottom of the post for full details.
Brief info:
My contention is about the Forecasting Research Institute’s recent LEAP survey.
One of the headline results from the survey is about the probabilities the respondents assign to each of three scenarios.
However, the question uses an indirect framing — an intersubjective resolution or metaprediction framing.
The specific phrasing of the question is quite important.
My contention is, if respondents took the question literally, as written, they did not actually report their probabilities for each scenario, and there is no way to derive their probabilities from what they did report.
Therefore, the headline result that states the respondents’ probabilities for the three scenarios is not actually true.
If my contention is right, then it means the results of the report are being misreported in a quite significant way. If my contention is wrong, then I must make a mea culpa and apologize to the Forecasting Research Institute for my error.
So, your help is requested. Am I right or wrong?
(Note: the post discusses multiple topics, but here I’m specifically asking for opinions on the intersubjective resolution/metaprediction concern raised in Edit #4.)
Self-driving cars are not close to getting solved. Don’t take my word for it. Listen to Andrej Karpathy, the lead AI researcher responsible for the development of Tesla’s Full Self-Driving software from 2017 to 2022. (Karpathy also did two stints as a researcher at OpenAI, taught a deep learning course at Stanford, and coined the term “vibe coding”.)
From Karpathy’s October 17, 2025 interview with Dwarkesh Patel:
Dwarkesh Patel01:42:55
You’ve talked about how you were at Tesla leading self-driving from 2017 to 2022. And you firsthand saw this progress from cool demos to now thousands of cars out there actually autonomously doing drives. Why did that take a decade? What was happening through that time?
Andrej Karpathy01:43:11
One thing I will almost instantly push back on is that this is not even near done, in a bunch of ways that I’m going to get to. Self-driving is very interesting because it’s definitely where I get a lot of my intuitions because I spent five years on it. It has this entire history where the first demos of self-driving go all the way to the 1980s. You can see a demo from CMU in 1986. There’s a truck that’s driving itself on roads.
Fast forward. When I was joining Tesla, I had a very early demo of Waymo. It basically gave me a perfect drive in 2014 or something like that, so a perfect Waymo drive a decade ago. It took us around Palo Alto and so on because I had a friend who worked there. I thought it was very close and then it still took a long time.
For some kinds of tasks and jobs and so on, there’s a very large demo-to-product gap where the demo is very easy, but the product is very hard. It’s especially the case in cases like self-driving where the cost of failure is too high. Many industries, tasks, and jobs maybe don’t have that property, but when you do have that property, that definitely increases the timelines.
For example, in software engineering, I do think that property does exist. For a lot of vibe coding, it doesn’t. But if you’re writing actual production-grade code, that property should exist, because any kind of mistake leads to a security vulnerability or something like that. Millions and hundreds of millions of people’s personal Social Security numbers get leaked or something like that. So in software, people should be careful, kind of like in self-driving. In self-driving, if things go wrong, you might get injured. There are worse outcomes. But in software, it’s almost unbounded how terrible something could be.
I do think that they share that property. What takes the long amount of time and the way to think about it is that it’s a march of nines. Every single nine is a constant amount of work. Every single nine is the same amount of work. When you get a demo and something works 90% of the time, that’s just the first nine. Then you need the second nine, a third nine, a fourth nine, a fifth nine. While I was at Tesla for five years or so, we went through maybe three nines or two nines. I don’t know what it is, but multiple nines of iteration. There are still more nines to go.
That’s why these things take so long. It’s definitely formative for me, seeing something that was a demo. I’m very unimpressed by demos. Whenever I see demos of anything, I’m extremely unimpressed by that. If it’s a demo that someone cooked up just to show you, it’s worse. If you can interact with it, it’s a bit better. But even then, you’re not done. You need the actual product. It’s going to face all these challenges when it comes in contact with reality and all these different pockets of behavior that need patching.
We’re going to see all this stuff play out. It’s a march of nines. Each nine is constant. Demos are encouraging. It’s still a huge amount of work to do. It is a critical safety domain, unless you’re doing vibe coding, which is all nice and fun and so on. That’s why this also enforced my timelines from that perspective.
Karpathy elaborated later in the interview:
The other aspect that I wanted to return to is that self-driving cars are nowhere near done still. The deployments are pretty minimal. Even Waymo and so on has very few cars. They’re doing that roughly speaking because they’re not economical. They’ve built something that lives in the future. They’ve had to pull back the future, but they had to make it uneconomical. There are all these costs, not just marginal costs for those cars and their operation and maintenance, but also the capex of the entire thing. Making it economical is still going to be a slog for them.
Also, when you look at these cars and there’s no one driving, I actually think it’s a little bit deceiving because there are very elaborate teleoperation centers of people kind of in a loop with these cars. I don’t have the full extent of it, but there’s more human-in-the-loop than you might expect. There are people somewhere out there beaming in from the sky. I don’t know if they’re fully in the loop with the driving. Some of the time they are, but they’re certainly involved and there are people. In some sense, we haven’t actually removed the person, we’ve moved them to somewhere where you can’t see them.
I still think there will be some work, as you mentioned, going from environment to environment. There are still challenges to make self-driving real. But I do agree that it’s definitely crossed a threshold where it kind of feels real, unless it’s really teleoperated. For example, Waymo can’t go to all the different parts of the city. My suspicion is that it’s parts of the city where you don’t get good signal. Anyway, I don’t know anything about the stack. I’m just making stuff up.
Dwarkesh Patel01:50:23
You led self-driving for five years at Tesla.
Andrej Karpathy01:50:27
Sorry, I don’t know anything about the specifics of Waymo. By the way, I love Waymo and I take it all the time. I just think that people are sometimes a little bit too naive about some of the progress and there’s still a huge amount of work. Tesla took in my mind a much more scalable approach and the team is doing extremely well. I’m kind of on the record for predicting how this thing will go. Waymo had an early start because you can package up so many sensors. But I do think Tesla is taking the more scalable strategy and it’s going to look a lot more like that. So this will still have to play out and hasn’t. But I don’t want to talk about self-driving as something that took a decade because it didn’t take it yet, if that makes sense.
Dwarkesh Patel01:51:08
Because one, the start is at 1980 and not 10 years ago, and then two, the end is not here yet.
Andrej Karpathy01:51:14
The end is not near yet because when we’re talking about self-driving, usually in my mind it’s self-driving at scale. People don’t have to get a driver’s license, etc.
I hope the implication for discussions around AGI timelines is clear.
[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.
I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)
But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the EA Forum. I thank the people who have been warm to me, who have had good humour, and who have said interesting, constructive things.
But negativity bias being what it is (and maybe “bias” is too biased a word for it; maybe we should call it “negativity preference”), the few people who have been really nasty to me have ruined the whole experience. I find myself trying to remember names, to remember who’s who, so I can avoid clicking on reply notifications from the people who have been nasty. And this is a sign it’s time to stop.
Psychological safety is such a vital part of online discussion, or any discussion. Open, public forums can be a wonderful thing, but psychological safety is hard to provide on an open, public forum. I still have some faith in open, public forums, but I tend to think the best safety tool is giving authors the ability to determine who is and isn’t allowed to interact with their posts. There is some risk of people censoring disagreement, sure. But nastiness online is a major threat to everything good. It causes people to self-censor (e.g. by quitting the discussion platform or by withholding opinions) and it has terrible effects on discourse and on people’s minds.
And private discussions are important too. One of the most precious things you can find in this life is someone you can have good conversations with who will maintain psychological safety, keep your confidences, “yes, and” you, and be constructive. Those are the kind of conversations that loving relationships are built on. If you end up cooking something that the world needs to know about, you can turn it into a blog post or a paper or a podcast or a forum post. (I’ve done it before!) But you don’t have to do the whole process leading up to that end product in public.
The EA Forum is unusually good in some important respects, which is kind of sad, because it shows us a glimpse of what maybe could exist on the Internet, without itself realizing that promise.
If anyone wants to contact me for some reason, you can send me a message via the forum and I should get it as an email. Please put your email address in the message so I can respond to you by email without logging back into the forum.
Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried’s behaviour years before the FTX collapse?
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I’m wondering if it’s a different model or maybe just good prompting or something else.
@Toby Tremlett🔹, are you SummaryBot’s keeper? Or did you just manage its evil twin?
It used to run on Claude, but I’ve since moved it to a ChatGPT project using GPT-5. I update the system instructions quarterly based on feedback, which probably explains the difference you’re seeing. You can read more in this doc on posting SummaryBot comments.
Thank you very much for the info! It’s probably down to your prompting, then. Squeezing things into 6 bullet points might just be a helpful format for ChatGPT, or for summaries (even human-written ones) in general. Maybe I will try that myself when I want to ask ChatGPT to summarize something.
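If I do try it, it would look something like this minimal sketch. The model name and the wording of the instructions are my guesses, not SummaryBot’s actual configuration:

```python
# Minimal sketch of asking a model for a six-bullet summary, in the spirit of
# what SummaryBot does. The model name and instructions below are assumptions,
# not SummaryBot's real setup.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "Summarize the following forum post in at most 6 bullet points. "
    "Preserve the author's main claims and any key caveats. "
    "Do not add opinions or information that is not in the post."
)

def summarize(post_text: str) -> str:
    """Return a six-bullet summary of the given post."""
    response = client.chat.completions.create(
        model="gpt-5",  # assumed model name, per the comment above
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content
```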
I also think there’s an element of “magic”/illusion to it, though, since I just noticed a couple mistakes SummaryBot made and now its powers seem less mysterious.
Here is the situation we’re in with regard to near-term prospects for artificial general intelligence (AGI). This is why I’m extremely skeptical of predictions that we’ll see AGI within 5 years.
-Current large language models (LLMs) have extremely limited capabilities. For example, they can’t score above 5% on the ARC-AGI-2 benchmark, they can’t automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying something that happened in 2025 caused something that happened in 2024, while listing the dates of the events. They struggle with things that are easy for humans, like playing hangman.
-The capabilities of LLMs have been improving slowly. There is only a modest overall difference between GPT-3.5 (the original ChatGPT model), which came out in November 2022, and newer models like GPT-4o, o4-mini, and Gemini 2.5 Pro.
-There are signs that there are diminishing returns to scaling for LLMs. Increasing the size of models and the size of the pre-training data doesn’t seem to be producing the desired results anymore. LLM companies have turned to scaling test-time compute to eke out more performance gains, but how far can that go?
-There may be certain limits to scaling that are hard or impossible to overcome. For example, once you’ve trained a model on all the text that exists in the world, you can’t keep training on exponentially[3] more text every year. Current LLMs might be fairly close to running out of exponentially[4] more text to train on, if they haven’t run out already.[5] (A rough sketch of this point appears after this list.)
-A survey of 475 AI experts found that 76% think it’s “unlikely” or “very unlikely” that “scaling up current AI approaches” will lead to AGI. So, we should be skeptical of the idea that just scaling up LLMs will lead to AGI, even if LLM companies manage to keep scaling them up and improving their performance by doing so.
-Few people have any concrete plan for how to build AGI (beyond just scaling up LLMs). The few people who do have a concrete plan disagree fundamentally on what the plan should be. All of these plans are in the early-stage research phase. (I listed some examples in a comment here.)
-Some of the scenarios people are imagining where we get to AGI in the near future involve strange, exotic, hypothetical processes wherein a sub-AGI AI system can automate the R&D that gets us from a sub-AGI AI system to AGI. This requires two things to be true: 1) that doing the R&D needed to create AGI is not a task that would require AGI or human-level AI and 2) that, in the near term, AI systems somehow advance to the point where they’re able to do meaningful R&D autonomously. Given that I can’t even coax o4-mini or Gemini 2.5 Pro into playing hangman properly, and given the slow improvement of LLMs and the signs of diminishing returns to scaling I mentioned, I don’t see how (2) could be true. The arguments for (1) feel very speculative and handwavy.
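Here’s the rough sketch I mentioned in the point about running out of text. The fraction of text already used and the required growth rate are illustrative assumptions of mine, not Epoch AI’s estimates (their projection is cited in the footnotes):

```python
import math

# Rough sketch of the "running out of text" point. The numbers are
# illustrative assumptions: suppose the largest current training sets use
# about 1/10 of the usable public text, and training data needs to grow
# roughly 3x per year to keep scaling as before.

fraction_already_used = 0.1  # assumed share of usable text already in training sets
growth_per_year = 3.0        # assumed required annual growth in training tokens

years_until_exhausted = math.log(1 / fraction_already_used) / math.log(growth_per_year)
print(f"Years until the public text supply is exhausted: {years_until_exhausted:.1f}")
# With these assumptions, the supply runs out in roughly two years. The exact
# date depends entirely on the assumed numbers, but the ceiling itself is real.
```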
Given all this, I genuinely can’t understand why some people think there’s a high chance of AGI within 5 years. I guess the answer is they probably disagree on most or all of these individual points.
Maybe they think the conventional written question-and-answer benchmarks for LLMs are fair apples-to-apples comparisons of machine intelligence and human intelligence. Maybe they are really impressed with the last 2 to 2.5 years of progress in LLMs. Maybe they are confident no limits to scaling or diminishing returns to scaling will stop progress anytime soon. Maybe they are confident that scaling up LLMs is a path to AGI. Or maybe they think LLMs will soon be able to take over the jobs of researchers at OpenAI, Anthropic, and Google DeepMind.
I have a hunch (just a hunch) that it’s not a coincidence many people’s predictions are converging (or herding) around 2030, give or take a few years, and that 2029 has been the prophesied year for AGI since Ray Kurzweil’s book The Age of Spiritual Machines in 1999. It could be a coincidence. But I have a sense that there has been a lot of pent-up energy around AGI for a long time and ChatGPT was like a match in a powder keg. I don’t get the sense that people formed their opinions about AGI timelines in 2023 and 2024 from a blank slate.
I think many people have been primed for years by people like Ray Kurzweil and Eliezer Yudkowsky and by the transhumanist and rationalist subcultures to look for any evidence that AGI is coming soon and to treat that evidence as confirmation of their pre-existing beliefs. You don’t have to be directly influenced by these people or by these subcultures to be influenced. If enough people are influenced by them or a few prominent people are influenced, then you end up getting influenced all the same. And when it comes to making predictions, people seem to have a bias toward herding, i.e., making their predictions more similar to the predictions they’ve heard, even if that ends up making their predictions less accurate.
The process by which people come up with the year they think AGI will happen seems especially susceptible to herding bias. You ask yourself when you think AGI will happen. A number pops into your head that feels right. How does this happen? Who knows.
If you try to build a model to predict when AGI will happen, you still can’t get around it. Some of your key inputs to the model will require you to ask yourself a question and wait a moment for a number to pop into your head that feels right. The process by which this happens will still be mysterious. So, the model is ultimately no better than pure intuition because it is pure intuition.
I understand that, in principle, it’s possible to make more rigorous predictions about the future than this. But I don’t think that applies to predicting the development of a hypothetical technology where there is no expert agreement on the fundamental science underlying that technology, and not much in the way of fundamental science in that area at all. That seems beyond the realm of ordinary forecasting.
One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:
These results are consistent with the idea that generative AI tools may function by exposing lower-skill workers to the best practices of higher-skill workers. Lower-skill workers benefit because AI assistance provides new solutions, whereas the best performers may see little benefit from being exposed to their own best practices. Indeed, the negative effects along measures of chat quality—RR [resolution rate] and customer satisfaction—suggest that AI recommendations may distract top performers or lead them to choose the faster or less cognitively taxing option (following suggestions) rather than taking the time to come up with their own responses.
I’m using “exponentially” colloquially to mean every year the LLM’s training dataset grows by 2x or 5x or 10x — something along those lines. Technically, if the training dataset increased by 1% a year, that would be exponential, but let’s not get bogged down in unimportant technicalities.
Epoch AI published a paper in June 2024 that predicts LLMs will exhaust the Internet’s supply of publicly available human-written text between 2026 and 2032.
Slight update to the odds I’ve been giving to the creation of artificial general intelligence (AGI) before the end of 2032. I’ve been anchoring the numerical odds of this to the odds of a third-party candidate like Jill Stein or Gary Johnson winning a U.S. presidential election. That’s something I think is significantly more probable than AGI by the end of 2032. Previously, I’d been using 0.1% or 1 in 1,000 as the odds for this, but I was aware that these odds were probably rounded.
I took a bit of time to refine this. I found that in 2016, FiveThirtyEight put the odds on Evan McMullin — who was running as an independent, not for a third party, but close enough — becoming president at 1 in 5,000 or 0.02%. Even these odds are quasi-arbitrary, since McMullin only became president in simulations where neither of the two major party candidates won a majority of Electoral College votes. In such scenarios, Nate Silver arbitrarily put the odds at 10% that the House would vote to appoint McMullin as president.
So, for now, it is more accurate for me to say: the probability of the creation of AGI before the end of 2032 is significantly less than 1 in 5,000 or 0.02%.
I can also expand the window of time from the end of 2032 to the end of 2034. That's a small enough expansion that it doesn't affect the probability much. Extending the window to the end of 2034 covers the latest dates that have appeared on Metaculus since the big dip in its timeline that happened in the month following the launch of GPT-4. By the end of 2034, I still put the odds of AGI significantly below 1 in 5,000 or 0.02%.
My confidence interval is over 95%. [Edited Nov. 28, 2025 at 3:06 PM Eastern. See comments below.]
I will continue to try to find other events to anchor my probability to. It's difficult to find good examples. An imperfect point of comparison is an individual's annual risk of being struck by lightning, which is 1 in 1.22 million. Over 9 years, the risk is about 1 in 135,000. Since the creation of AGI within 9 years seems less likely to me than my being struck by lightning, I could also say the odds of AGI's creation within that timeframe are less than 1 in 135,000, or less than 0.0007%.
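For what it's worth, here is a minimal sketch of that arithmetic (at these magnitudes, compounding the annual risk over 9 years is nearly the same as multiplying it by 9):

```python
# Annual individual risk of being struck by lightning: 1 in 1.22 million.
annual_risk = 1 / 1_220_000

# Probability of being struck at least once over 9 years.
nine_year_risk = 1 - (1 - annual_risk) ** 9

print(f"about 1 in {round(1 / nine_year_risk):,}")  # about 1 in 135,556
print(f"as a percentage: {nine_year_risk:.5%}")     # about 0.00074%
```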
It seems like once you get significantly below 0.1%, though, it becomes hard to intuitively grasp the probability of events or find good examples to anchor off of.
I don’t think this should be downvoted. It’s a perfectly fine example of reasoning transparency. I happen to disagree, but the disagree-vote button is there for a reason.
Thank you. Karma downvotes have ceased to mean anything to me.
People downvote for no discernible reason, or at least not for reasons that are obvious to me or that they explain. I'm left to surmise what the reasons might be, including (in some cases) possibly disagreement, pique, or spite.
Neutrally informative things get downvoted, factual/straightforward logical corrections get downvoted, respectful expressions of mainstream expert opinion get downvoted — everything, anything. The content is irrelevant and the tone/delivery is irrelevant. So, I’ve stopped interpreting downvotes as information.
I don’t think this sort of anchoring is a useful thing to do. There is no logical reason for third party presidency success and AGI success to be linked mathematically. It seems like the third party thing is based on much greater empirical grounding.
You linked them because your vague impression of the likelihood of one was roughly equal to your vague impression of the likelihood of the other. If your vague impression of the third-party thing changes, it shouldn't change your opinion of the other thing. You think that AGI is 5 times less likely than you previously thought because you got more precise odds about one guy winning the presidency ten years ago?
My (perhaps controversial) view is that forecasting AGI is in the realm of speculation where quantification like this is more likely to obscure understanding than to help it.
I don't think AGI is five times less likely than I thought it was a week ago; rather, I realized the number I had been translating my qualitative, subjective intuition into was five times too high. I also didn't change my qualitative, subjective intuition of the probability of a third-party candidate winning a U.S. presidential election. What changed was just the numerical estimate of that probability — from an arbitrarily rounded 0.1% figure to a still quasi-arbitrary but at least somewhat more rigorously derived 0.02%. The two outcomes remain logically disconnected.
I agree that forecasting AGI is an area where any sense of precision is an illusion. The level of irreducible uncertainty is incredibly high. As far as I’m aware, the research literature on forecasting long-term or major developments in technology has found that nobody (not forecasters and not experts in a field) can do it with any accuracy. With something as fundamentally novel as AGI, there is an interesting argument that it’s impossible, in principle, to predict, since the requisite knowledge to predict AGI includes the requisite knowledge to build it, which we don’t have — or at least I don’t think we do.
The purpose of putting a number on it is to communicate a subjective and qualitative sense of probability in terms that are clear, that other people can understand. Otherwise, it's hard to put things in perspective. You can use terms like "extremely unlikely", but what does that mean? Is something that has a 5% chance of happening extremely unlikely? So, rolling a natural 20 is extremely unlikely? (There are guides to determining the meaning of such terms, but they rely on assigning numbers to the terms, so we're back to square one.)
Something that works just as well is comparing the probability of one outcome to the probability of another outcome. So, just saying that the probability of near-term AGI is less than the probability of Jill Stein winning the next presidential election does the trick. I don’t know why I always think of things involving U.S. presidents, but my point of comparison for the likelihood of widely deployed superintelligence by the end of 2030 was that I thought it was more likely the JFK assassination turned out to be a hoax, and that JFK was still alive.[1]
I initially resisted putting any definite odds on near-term AGI, but I realized a lack of specificity was hurting my attempts to get my message across.
This approach doesn't work perfectly, either, because what if different people have different opinions/intuitions on the probability of outcomes like Jill Stein winning? But putting low probabilities (well below 1%) into numbers has a counterpart problem: you don't know whether someone else shares your intuitive understanding of what a 1 in 1,000 chance, a 1 in 10,000 chance, or a 1 in 100,000 chance means for highly irreducibly uncertain events that are rare (e.g. recent U.S. presidential elections), unprecedented (e.g. AGI), or one-off (e.g. Russia ending the current war against Ukraine), and which can't be statistically or mechanically predicted.
When NASA models the chance of an asteroid hitting Earth as 1 in 25,000 or the U.S. National Weather Service calculates the annual individual risk of being hit by lightning as 1 in 1.22 million, I trust that has some objective, concrete meaning. If someone subjectively guesses that Jill Stein has a 1 in 25,000 chance of winning in 2028, I don’t know if someone with a very similar gut intuition about her odds would also say 1 in 25,000, or if they’d say a number 100x higher or lower.
Possibly forecasters and statisticians have a good intuitive sense of this, but most regular people do not.
Maybe this is a misapplication of the concept of confidence intervals — math is not my strong suit, nor is forecasting, so let me know — but what I had in mind is that I’m forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the “correct” probability range (whatever that ends up meaning).
But now that I’m thinking about it more and doing a cursory search, I think with a range of probabilities for a given date (e.g. 0.00% to 0.02% by end of 2034) as opposed to a range of years (e.g. 5 to 20 years) or another definite quantity, the probability itself is supposed to represent all the uncertainty and the confidence interval is redundant.
I’m forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the “correct” probability range
I kinda get what you’re saying but I think this is double-counting in a weird way. A 0.01% probability means that if you make 10,000 predictions of that kind, then about one of them should come true. So your 95% confidence interval sounds like something like “20 times, I make 10,000 predictions that each have a probability between 0.00% and 0.02%; and 19 out of 20 times, about one out of the 10,000 predictions comes true.”
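Here's a minimal simulation sketch of that framing, under the purely illustrative assumption that each batch's underlying probability is drawn uniformly from the 0.00% to 0.02% range:

```python
import random

random.seed(0)  # for reproducibility

batches = 20                    # "20 times..."
predictions_per_batch = 10_000  # "...I make 10,000 predictions"

for i in range(batches):
    # Illustrative assumption: the underlying probability for this batch
    # falls somewhere between 0.00% and 0.02%.
    p = random.uniform(0.0, 0.0002)
    # Count how many of the 10,000 predictions come true at probability p.
    hits = sum(random.random() < p for _ in range(predictions_per_batch))
    print(f"batch {i + 1:2d}: p = {p:.4%}, predictions that came true: {hits}")
```

With that (assumed) uniform draw, each batch of 10,000 predictions produces about one hit on average, which matches the "about one out of the 10,000 predictions comes true" framing.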
You could reduce this to a single point probability. The math is a bit complicated but I think you’d end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren’t actually claiming to have a 0.001% credence.
I think there are other meaningful statements you could make. You could say something like, “I’m 95% confident that if I spend 10x longer studying this question, then I would end up with a probability between 0.00% and 0.02%.”
You could reduce this to a single point probability. The math is a bit complicated but I think you’d end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren’t actually claiming to have a 0.001% credence.
Yeah, I’m saying the probability is significantly less than 0.02% without saying exactly how much less — that’s much harder to pin down, and there are diminishing returns to exactitude here — so that means it’s a range from 0.00% to <0.02%. Or just <0.02%.
The simplest solution, and the correct/generally recommended solution, seems to be to simply express the probability, unqualified.
Yann LeCun (a Turing Award-winning pioneer of deep learning) leaving Meta AI — and probably, I would surmise, being nudged out by Mark Zuckerberg (or another senior Meta executive) — is a microcosm of everything wrong with AI research today.
LeCun is the rare researcher working on fundamental new ideas to push AI forward on a paradigm level. Zuckerberg et al. seem to be abandoning that kind of work to focus on a mad dash to AGI via LLMs, on the view that enough scaling and enough incremental engineering and R&D will push current LLMs all the way to AGI, or at least very powerful, very economically transformative AI.
I predict that in five years or so, this will be seen in retrospect (by many people, if not by everyone) as an incredibly wasteful mistake by Zuckerberg, and also by other executives at other companies (and the investors in them) making similar decisions. The amount of capital being spent on LLMs is eye-watering and could fund a lot of fundamental research, some of which could have turned up some ideas that would actually lead to useful, economically and socially beneficial technology.
LeCun is also probably one of the top people to have worsened the AI safety outlook this decade, and from that perspective perhaps his departure is a good thing for the survival of the world, and thus also Meta’s shareholders?
I couldn’t disagree more strongly. LeCun makes strong points about AGI, AGI alignment, LLMs, and so on. He’s most likely right. I think the probability of AGI by the end of 2032 is significantly less than 1 in 1,000 and the probability of LLMs scaling to AGI is even less than that. There’s more explanation in a few of my posts. In order of importance: 1, 2, 3, 4, and 5.
The core ideas that Eliezer Yudkowsky, Nick Bostrom, and others came up with about AGI alignment/control/friendliness/safety were developed long before the deep learning revolution kicked off in 2012. Some of Yudkowsky’s and Bostrom’s key early writings about these topics are from as far back as the early 2000s. To quote Clara Collier writing in Asterisk:
We’ve learned a lot since 2008. The models Yudkowsky describes in those old posts on LessWrong and Overcoming Bias were hand-coded, each one running on its own bespoke internal architecture. Like mainstream AI researchers at the time, he didn’t think deep learning had much potential, and for years he was highly skeptical of neural networks. (To his credit, he’s admitted that that was a mistake.) But If Anyone Builds It, Everyone Dies very much is about deep learning-based neural networks. The authors discuss these systems extensively — and come to the exact same conclusions they always have. The fundamental architecture, training methods and requirements for progress for modern AI systems are all completely different from the technology Yudkowsky imagined in 2008, yet nothing about the core MIRI story has changed.
So, regardless of the timeline of AGI, that’s dubious.
LessWrong’s intellectual approach has produced about half a dozen cults, but despite many years of effort, millions of dollars in funding, and the hard work of many people across various projects, and despite many advantages, such as connections that can open doors, it has produced nothing of objective, uncontroversial, externally confirmable intellectual, economic, scientific, technical, or social value. The perceived value of anything it has produced is solely dependent on whether you agree or disagree with its worldview — I disagree. LessWrong claims to have innovated a superior form of human thought, and yet has nothing to show for it. The only explanation that makes any sense is that they’re wrong, and are just fooling themselves. Otherwise, to quote Eliezer Yudkowsky, they’d be “smiling from on top of a giant heap of utility.”
Yudkowsky's and LessWrong's views on AGI are correctly seen by many experts, such as LeCun, as unserious and not credible. In turn, the typical LessWrong response to LeCun is unacceptably bad intellectually: it fails to understand his views at a basic level, let alone respond to them convincingly.
Why would any rational person take that seriously?
Just calling yourself rational doesn’t make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong.
Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.
This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one’s mind can feel emotionally unbearable. Because it feels as if to change your mind is to accept that you deserve to be humiliated, that it’s morally appropriate. Conversely, if you humiliated others (or attempted to), to admit you were wrong about the idea is to admit you wronged these people, and did something immoral. That too can feel unbearable.
So, a few practical recommendations:
- Don't call yourself rational or anything similar.
- Try to practice humility when people disagree with you.
- Try to be curious about what other people think.
- Be kind to people when you disagree, so it's easier to admit if they were right.
- Avoid people who aren't kind to you when you disagree, so it's easier to admit if you were wrong.
Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
I'm sorry you encountered this, and I don't want to minimise your personal experience.
I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.
On the whole, though, I've found the EA community (both online and those I've met in person) to be incredibly pro-LGBT and pro-trans. The underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism) point that way, as do the underlying demographics (e.g. young, highly educated, socially liberal).
I think where there might be a split is in progressive (as in, politically leftist) framings of issues and the type of language used to talk about these topics. I think those framings often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don't think that means the community as a whole, or even that sub-section, is 'anti-LGBT' or 'anti-trans', and I think there are historical and multifaceted reasons why there's some enmity between 'progressive' and 'EA' camps/perspectives.
Nevertheless, I’m sorry that you experience this sentiment, and I hope you’re feeling ok.
The progressive and/or leftist perspective on LGB and trans people offers the most forthright argument for LGB and trans equality and rights. The liberal and/or centre-left perspective tends to be more milquetoast, more mealy-mouthed, more fence-sitting.
The context for what I’m discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here.
Warning: This is a polemic that uses harsh language. I still completely, sincerely mean everything I say here and I consciously endorse it.[1]
It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty and can cite overtly racist and white supremacist sources to support this argument (even a source with significant connections to the 1930s and 1940s Nazi Party in Germany and the American Nazi Party, a neo-Nazi party) and that post can receive a significant amount of approval and defense from people in EA, even after the thin disguise over top of the racism is removed by perceptive readers. That is such a bonkers thing and such a morally repugnant thing, I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can’t correct it.[2]
My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/Bay Area rationalist community is that they don’t like social justice, anti-racism, or socially/culturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the “rationalist” community?? What are you talking about?! As I recall based on votes, a majority of forum users who voted on the comment agreed.[3]
Overall, LessWrong users seem broadly sympathetic to racist arguments and views.[4] Same for sexist or anti-feminist views, and extremely so for anti-LGBT (especially anti-trans) views. Personally, I find it to be the most unpleasant website I’ve spent more than ten hours reading. When I think of LessWrong, I picture a dark, dingy corner of a house. I truly find it to be awful.
The more I've thought about it, the more truth I find in the blogger Ozy Brennan's interpretation of LessWrong and the rationalist community through the concept of the "cultic milieu" and a comparison to new religious movements (not cults in the more usual sense connoting high-control groups). Ozy Brennan self-identifies as a rationalist and is active in the community, which makes this analysis far more believable than if it came from an outsider. The way I'd interpret Ozy's blog post, which Ozy may not agree with, is that rationalists are in some sense fundamentally devoted to being incorrect, since they're fundamentally devoted to being against consensus or majority views on many major topics — regardless of whether those views are correct or incorrect — and inevitably that will lead to having a lot of incorrect views.
I see very loose, very limited analogies between LessWrong and online communities devoted to discussing conspiracy theories like QAnon, and between LessWrong and online incel communities. Conspiracy theories because LessWrong has a suspicious, distrustful, at least somewhat paranoid or hypervigilant view of people and the world, this impulse to turn over rocks to find where the bad stuff is. Also, there's the impulse to connect too much. To subsume too much under one theory or worldview. And too much reliance on one's own fringe community to explain the world and interpret everything. Both, in a sense, are communities built around esoteric knowledge. And, indeed, I've seen some typical sort of conspiracy theory-seeming stuff on LessWrong related to American intelligence agencies and so on.
Incel communities because the atmosphere of LessWrong feels rather bitter, resentful, angry, unhappy, isolated, desperate, arrogant, and hateful, and in its own way is also a sort of self-help or commiseration community for young men who feel left out of the normal social world. But rather than encouraging healthy, adaptive responses to that experience, both communities encourage anti-social behaviour, leaning into distorted thinking, resentment, and disdainful views of other people.
I just noticed that Ozy recently published a much longer article in Asterisk Magazine on the topic of actual high-control groups or high-demand groups with some connection to the rationalist community. It will take me a while to properly read the whole thing and to think about it. But at a glance, there are some aspects of the article that are relevant to what I’m discussing here, such as this quote:
Nevertheless, some groups within the community have wound up wildly dysfunctional–a term I’m using to sidestep definitional arguments about what is and isn’t a cult. And some of the blame can be put on the rationalist community’s marketing.
The Sequences make certain implicit promises. There is an art of thinking better, and we’ve figured it out. If you learn it, you can solve all your problems, become brilliant and hardworking and successful and happy, and be one of the small elite shaping not only society but the entire future of humanity.
This is, not to put too fine a point on it, not true.
Multiple interviewees remarked that the Sequences create the raw material for a cult. To his credit, their author, Eliezer Yudkowsky, shows little interest in running one.
And this quote:
But people who are drawn to the rationalist community by the Sequences often want to be in a cult. To be sure, no one wants to be exploited or traumatized. But they want some trustworthy authority to change the way they think until they become perfect, and then to assign them to their role in the grand plan to save humanity. They’re disappointed to discover a community made of mere mortals, with no brain tricks you can’t get from Statistics 101 and a good CBT workbook, whose approach to world problems involves a lot fewer grand plans and a lot more muddling through.
And this one:
Jessica Taylor, an AI researcher who knew both Zizians and participants in Leverage Research, put it bluntly. “There’s this belief [among rationalists],” she said, “that society has these really bad behaviors, like developing self-improving AI, or that mainstream epistemology is really bad–not just religion, but also normal ‘trust-the-experts’ science. That can lead to the idea that we should figure it out ourselves. And what can show up is that some people aren’t actually smart enough to form very good conclusions once they start thinking for themselves.”
One way that thinking for yourself goes wrong is that you realize your society is wrong about something, don’t realize that you can’t outperform it, and wind up even wronger. But another potential failure is that, knowing both that your society is wrong and that you can’t do better, you start looking for someone even more right. Paradoxically, the desire to ignore the experts can make rationalists more vulnerable to a charismatic leader.
Or, as Jessica Taylor said, “They do outsource their thinking to others, but not to the typical authorities.”
In principle, you could have the view that the typical or median person is benefitted by the Sequences or by LessWrong or the rationalist community, and it’s just an unfortunate but uncommon side-effect for people to slip into cults or high-control groups. It sounds like that’s what Ozy believes. My view is much harsher: by and large, the influence that LessWrong/the rationalist community has on people is bad, and people who take these ideas and this subculture to an extreme are just experiencing a more extreme version of the bad that happens to pretty much everyone who is influenced by these ideas and this subculture. (There might be truly minor exceptions to this, but I still see this as the overall trend.)
Obviously, there is now a lot of overlap between the EA Forum and LessWrong and between EA and the rationalist community. I think to the extent that LessWrong and the rationalist community have influenced EA, EA has become something much worse. It’s become something repelling to me. I don’t want any of this cult stuff. I don’t want any of this racist stuff. Or conspiracy theory stuff. Or harmful self-help stuff for isolated young men. I’m happy to agree with the consensus view most of the time because I care about being correct much more than I care about being counter-consensus. I am extremely skeptical toward esoteric knowledge and I think it’s virtually always either nonsense or prosaic stuff repackaged to look esoteric. I don’t buy these promises of unlocking powerful secrets through obscure websites.
There was always a little bit of overlap between EA and the rationalist community, starting very early on, but it wasn’t a ruinous amount. And it’s not like EA didn’t independently have its own problems before the rationalist community’s influence increased a lot, but those problems seemed more manageable. The situation now feels like the rationalist community is unloading more and more of its cargo onto the boat that is EA, and EA is just sinking deeper and deeper into the water over time. I feel sour and queasy about this because EA was once something I loved and it’s becoming increasingly laden with things I oppose in the strongest possible terms, like racism, interpersonal cruelty, and extremely irrational thinking patterns. How can people who were in EA because of global poverty and animal welfare, who had no previous affiliation with the rationalist community, stand this? Are they all gone already? Have they opted to recede from public arguments and just focus on their own particular niches? What gives?
And to the extent that the racism in EA is independently EA's problem and has nothing to do with the influence of the rationalist community (which obviously has to be more than nothing), that is 100% EA's problem. But I can't imagine racism in EA could be satisfactorily addressed without significant conflict with, and alienation of, many people who overlap between the rationalist community and EA and who either endorse or strongly sympathize with racist views. (For example, in January 2023, when the Centre for Effective Altruism published a brief statement that affirmed the equality of Black people in response to the publication of a racist email by the philosopher Nick Bostrom, the most upvoted comment was from a prominent rationalist that started, "I feel really quite bad about this post," and argued at length that universal human equality is not a tenet of effective altruism. This is unbelievably foolish and unbelievably morally wrong. Independently, that person has said and done things that seem to indicate either support or sympathy for racist views, which makes me think it was probably not just a big misunderstanding.)[5] That's why I've made the diversion from talking about racism on the EA Forum into discussing the rationalist community's influence on EA.
Racism is paradigmatically evil and there is no moral or rational justification for it. Don’t lose sight of something so fundamental and clear. Don’t let EA drown under the racism and all the other bad stuff people want to bring to it. (Hey, now EA is the drowning child! Talk about irony!)
Incidentally, LessWrong and the rationalist community are dead wrong about near-term AGI as well — specifically, the probability of AGI before January 1, 2033 is significantly less than 0.1% and the MIRI worldview on alignment is most likely either just generally wrong/misguided or at least not applicable to deep learning-based systems — and that poses its own big problem for EA to the extent that EA has been influenced to accept LessWrong/rationalists’ views about near-term AGI. So, the influence of the rationalist community on EA has been damaging in multiple respects. (Although, again, EA bears responsibility for its part in all of it, both allowing the influence and for whatever portion of the mistake it would have made without that influence.)
About a day after posting this quick take, I changed the first sentence of this quick take from just italicized to a heading to make the links to the Reflective Altruism post more prominent and harder to miss. The sentence was always there.
Edited on October 25, 2025 at 11:22 PM Eastern to add: If you don’t know about the incident I’m referring to here, the context for what I’m discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here. The links to these Reflective Altruism posts were always in the first sentence of this quick take, but I’m adding this footnote to make those links harder to miss.
Edited on October 25, 2025 at 10:52 PM Eastern to add: I purposely omitted a link to this comment because I didn’t want to make this quick take a confrontation against the person who wrote it. But if you don’t believe me and you want to see the comment for yourself, send me a private message and I’ll send you the link.
Edited on October 25, 2025 at 11:25 PM Eastern to add: This is extensively documented in a different Reflective Altruism post from the two I have already linked, which you can find here.
Edited on October 25, 2025 at 11:19 PM Eastern to add: I'm referring here to the Manifest 2024 conference, which was held at Lighthaven in Berkeley, California, a venue owned by Lightcone Infrastructure, the same organization that owns and operates LessWrong. I'm also referring to the discussions that happened after the event. There have been many posts about this event on the EA Forum. One post I found interesting, from a pseudonymous self-described member of the rationalist community, was critical of some aspects of the event and of some aspects of the rationalist community. You can read that post here.
Edited on October 26, 2025 at 11:57 PM Eastern to add: See also the philosopher David Thorstad’s post about Manifest 2024 on his blog Reflective Altruism. David’s posts are nearly encyclopedic in their thoroughness and have an incredibly high information density.
A possible explanation for why this post is heavily downvoted:
It makes serious, inflammatory, accusative, broad claims in a way that does not promote civil discussion
It rarely cites specific examples and facts that would serve to justify these claims
You linked to an article by Reflective Altruism, but I think it would have been beneficial to put links to specific examples directly in your text.
Two of the specific examples you use do not seem to be presented accurately:
About the post about genetically editing Africans to overcome poverty: “That post can receive a significant amount of approval and defense from people in EA. [...] Effective altruism as a movement probably deserves to fail for that, if it can’t correct it”
You’re talking about that post. However, you’re failing to mention that it currently has negative karma and twice as much disagreement as agreement. If anything, it is representative of something that EA (as a whole) does not support.
“When the CEA published a brief statement that affirmed the equality of Black people in response to the publication of a racist email by the philosopher Nick Bostrom, the most upvoted comment was from a prominent rationalist that started, ‘I feel really quite bad about this post,’ and argued at length that universal human equality is not a tenet of effective altruism. This is unbelievably foolish and unbelievably morally wrong.”
Your claim implies that the commenter said that because of racist reasons. However, they say in that very comment that “I value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality.” And much of their disagreement centred on the form of the statement.
I am a strong believer in civility and kindness, and although my quick take used harsh language, I think that is appropriate. I think, in a way, it can even be more respectful to speak plainly, directly, and honestly, as opposed to being passive-aggressive and dressing up insults in formal language.
I am expecting people to know the context or else learn what it is. It would not be economical for me to simply recreate the work already done on Reflective Altruism in my quick take.
That post only has negative karma because I strong-downvoted it. If I remove my strong downvote, it has 1 karma. Eight agrees and 14 disagrees is more disagrees than agrees, but it's still not a good ratio. Also, this is about more than just the scores on the post; it's also about the comments defending the post, both on that post itself and elsewhere, and the scores on those comments.
I don’t think racist ideas should have 1 karma when you exclude my vote.
I think if someone says in response to a racist email about Black people from someone in our community that Black people have equal value to everyone else, your response should not be to argue that Black people have “approximately” as much value as white people. Normally, I would extend much more benefit of the doubt and try to interpret the comment more charitably, but subsequent evidence — namely, the commenter’s association with and defense of people with extreme racist views — has made me interpret that comment much less charitably than I otherwise would. In any case, even on the most charitable interpretation, it is foolish and morally wrong.
I think the issue is that, from my standpoint, there is a combination of harsh language, many broad claims about EA and LessWrong, which are both very negative and vague, and a lack of specific evidence in the text.
I expect few people here to be swayed by this kind of communication, since you may simply be overreacting and have an extremely low threshold for using terms like "racism". It's the discourse I tend to see on Twitter.
As an example of what I'd call an overreaction, when you say that someone did something "unbelievably foolish and unbelievably morally wrong," I am thinking of very bad stuff, like committing fraud with charity money.
I am not thinking about a comment where someone said that "I value people approximately equally in impact estimates" (instead of "absolutely equally"). The lack of evidence means I can't assess the commenter's specific intentions.
There is a lot of context to fill people in on and I’ll leave that to the Reflective Altruism posts. I also added some footnotes that provide a bit more context. I wasn’t even really thinking about explaining everything to people who don’t already know the background.
I may be overreacting or you may be underreacting. Who’s to say? The only way to find out is to read the Reflective Altruism posts I cited and get the background knowledge that my quick take presumes.
I agree that discourse on Twitter is unbelievably terrible, but one of the ways that I believe using Twitter harms your mind is you just hear the most terrible points and arguments all the time, so you come to discount things that sound facially similar in non-Twitter contexts. I advocate that people completely quit Twitter (and other microblogging platforms like Bluesky) because I think it gets people into the habit of thinking in tweets, and thinking in tweets is ridiculous. When Twitter started, it was delightfully inane. The idea of trying to say anything in such a short space was whimsical. That it’s been elevated to a platform for serious discourse is absurd.
Again, the key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N word and said Black people are stupid. The Centre for Effective Altruism (CEA) released a very short, very simple statement that said all people are equal, i.e., in this context, Black people are equal to everyone else.
The commenter responded harshly against CEA's statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people. And since then that commenter was involved in a controversy around racism, i.e., the Manifest 2024 conference. If you're unfamiliar, you can read about that conference on Reflective Altruism here.
In that post, there's a quote from Shakeel Hashim, who was previously the Head of Communications at the Centre for Effective Altruism (CEA):
By far the most dismaying part of my work at CEA was the increasing realisation that a big chunk of the rationalist community is just straight up racist. EA != rationalism, but I do think major EA orgs need to do a better job at cutting ties with them. Fixing the rationalism community seems beyond hope for me — prominent leaders are some of the worst offenders here, and it’s hard to see them going away. And the entire “truth seeking” approach is often a thinly veiled disguise for indulging in racism.
So, don’t take my word for it.
The hazard of speaking too dispassionately or understating things is that it gives people a misleading impression. Underreacting is dangerous, just as overreacting is. This is why the harsh language is necessary.
Yes, knowing the context is vital to understanding where the harsh language is coming from, but I wasn’t really writing for people who don’t have the context (or who won’t go and find out what it is). People who don’t know the context can dismiss it, or they can become curious and want to find out more.
But colder, calmer, more understated language can also be easily dismissed, and is not guaranteed to elicit curiosity, either. And the danger there is that people tend to assume if you’re not speaking harshly and passionately, then what you’re talking about isn’t a big deal. (Also, why should people not just say what they really mean?)
Thanks for the answer; it explains things better for me.
I'll just point out that another element that bugged me about the post was the lack of balance. It felt like things were written with an attitude that tries to judge everything in a negative light, which doesn't make it trustworthy in my opinion.
Two examples:
The key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N word and said Black people are stupid
The email was indeed racist, but Nick Bostrom wrote it 26 years ago and has since apologised for it (the apology itself may be debated, but this is still important missing context).
The commenter responded harshly against CEA’s statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people.
The comment literally states the opposite, and I did provide quotes. It really feels like you are trying to interpret things uncharitably.
So far, I feel like the examples provided are mostly debatable. I’d expect more convincing stuff before concluding there is a deep systemic issue to fix.
The quote from CEA's former head of communications is more relevant evidence, I must admit, though I don't know how widespread or accurate their perception is (it doesn't really match what I've seen).
I’d also appreciate some balance by highlighting all the positive elements EA brings to the table, such as literally saving the lives of thousands of Black people in Africa.
I think the overall theme of your complaints is that I don't provide enough context for what I'm talking about, which is fair if you're reading the post without context, but a lot of posts on the EA Forum are "inside baseball" and assume the reader has a lot of context. So, maybe this is an instance of context collapse, where something written with one audience in mind is interpreted differently by another audience with less context or a different context.
I don’t think it’s wrong for you to have the issues you’re having. If I were in your shoes, I would probably have the same issues.
But I don’t know how you could avoid these issues and still have “inside baseball” discussion on the EA Forum. This is a reason the “community” tag exists on the forum. It’s so people can separate posts that are interesting and accessible to a general audience from posts that only make sense if your head has already been immersed in the community stuff for a while.
The email was indeed racist, but Nick Bostrom wrote it 26 years ago and has since apologised for it (the apology itself may be debated, but this is still important missing context).
I agree this is important context, but this is the sort of "inside baseball" stuff where I generally assume the kind of people interested in reading EA Forum community posts are already well aware of what happened, and I'm only providing more context now because you're directly asking me about it. Reflective Altruism is excellent because the author of that blog, David Thorstad, writes what amount to encyclopedia articles of context for these sorts of things. So, I just refer you to the relevant Reflective Altruism posts about the topics you're interested in. (There is a post on the Bostrom email, for example.)
The comment literally states the opposite, and I did provide quotes.
The comment says that people are approximately equally valuable, not that they are equally valuable, and it's hard to know what exactly this means to its author. But the context is that CEA is saying Black people are equally valuable, and the commenter is saying he disagrees, feels bad about what CEA is saying, and harshly criticizes CEA for saying it. And, subsequently, the commenter organized a conference that was friendly to people with extreme racist views, such as white nationalism. The subsequent discussion of that conference did not allay the concerns of people who found it concerning.
What we’re talking about here is stuff like people defending slavery, defending colonialism, defending white nationalism, defending segregation, defending the Nazi regime in Germany, and so on. I am not exaggerating. This is literally the kind of things these people say. And the defenses about why people who say such things should be welcomed into the effective altruist community are not satisfactory.
For me, this is a case where, at multiple steps, I have left a more charitable interpretation open, but, at multiple turns, the subsequent evidence has pointed to the conclusion that Shakeel Hashim (the former Head of Communications at CEA) came to: that this is just straight-up racism.
I refer you to the following Reflective Altruism posts: Human Biodiversity (Part 2: Manifest), about the Manifest 2024 conference and the ensuing controversy around racism, and Human Biodiversity (Part 7: LessWrong). The post on LessWrong has survey data that supports Shakeel Hashim's comment about racism in the rationalist community.
I’d also appreciate some balance by highlighting all the positive elements EA brings to the table, such as literally saving the lives of thousands of Black people in Africa.
I've had an intense interest in and affinity for effective altruism since before it was called effective altruism. I think it must have been in 2008 when I joined a Facebook group called Giving What We Can, created by the philosopher Toby Ord. As I recall, it had just a few hundred members, maybe around 200. The website for Giving What We Can was still under construction and I don't think the organization had been legally incorporated at that point. So, this has been a journey of 17 years for me, which is more than my entire adult life. Effective altruism has been an important part of my life story. Some of my best memories from my time at university are of the friends I made through my university effective altruism group. That's a time in my life I will always treasure and bittersweetly reminisce on, sweetly because it was so beautiful, bitterly because it's over.
If I thought there was nothing good about EA, I wouldn’t be on the EA Forum, and I wouldn’t be writing things about how to diagnose and fix EA’s problems. I would just disavow it and disassociate myself from it, as sadly many people have already done by now. I love the effective altruism I knew in the decade from 2008 to 2018, and it would be sad to me if that’s no longer on the Earth. For instance, I do think saving the lives of people living in poverty in sub-Saharan Africa is a worthy cause and a worthy achievement. This is precisely why I don’t like EA both abandoning global poverty as a cause area and allowing the encroachment of the old colonialist, racist ideas that the people I admire in international development like the economist William Easterly (author of the book The White Man’s Burden and the old blog Aid Watch) warned us so insistently we needed to avoid in contemporary international aid work.
Can you imagine a worse corruption, a worse twisting of this than to allow talk about why Black people are more genetically suited to slavery than white people, or how Europe did Africa a favour by colonizing it, or how Western countries should embrace white nationalism? That’s fucking insanity. That is evil. If this is what effective altruism is becoming, then as much as I love what effective altruism once was, effective altruism should die. It has betrayed what it once was and, on the values of the old effective altruism, the right decision would be to oppose the new effective altruism. It really couldn’t be more clear.
Thanks, I now better understand the context and where you're coming from. The style is easier for me to read and I appreciate that.
I won’t have much more time for this conversation, but just two points:
This is precisely why I don’t like EA both abandoning global poverty as a cause area
Is this actually true? As far as I can tell, global poverty is still number one in terms of donations, GiveWell is doing great, and most of the Charity Entrepreneurship charities are in this area.
Can you imagine a worse corruption, a worse twisting of this than to allow talk about why Black people are more genetically suited to slavery than white people, or how Europe did Africa a favour by colonizing it, or how Western countries should embrace white nationalism?
Oh, yes, that would be awful. But I'd expect that virtually everybody on the EA Forum would be against that.
And so far, in the examples you've given, you don't show that even a sizeable minority of people would agree with these claims. For instance, for Manifold, you pointed to the fact that some EAs work with a forecasting organisation from the rationalist community that ran a conference inviting many speakers to speak on forecasting, and some of these speakers had previously written some racist stuff on topics unrelated to the conference (and even then, that led to quite a debate).
My understanding might be inaccurate, of course, but that's such a long chain that I would consider this quite far from a prevalent issue with large current negative consequences.
Another issue, and a reason the comment is getting downvoted heavily (including by me), is that you seem to conflate "is" and "ought" in this post; without that conflation, this post would not exist.
You routinely leap from “a person has moral views that are offensive to you” to “they are wrong about the facts of the matter”, and your evidence for this is paper thin at best.
Being able to separate moral views from beliefs on factual claims is one of the things that is expected if you are in EA/LW spaces.
This is not mutually exclusive with the issues CB has found.
Another issue, and a reason the comment is getting downvoted heavily (including by me), is that you seem to conflate "is" and "ought" in this post; without that conflation, this post would not exist.
You routinely leap from “a person has moral views that are offensive to you” to “they are wrong about the facts of the matter”, and your evidence for this is paper thin at best.
Being able to separate moral views from beliefs on factual claims is one of the things that is expected if you are in EA/LW spaces.
I don’t agree with this evaluation and, as stated, it’s just an unsupported assertion. So, there is nothing really here for me to respond to except to say I disagree.
It would help to have an example of what you mean by this. I imagine, if you gave an example, I would probably say that I think your characterization is simply wrong, and I find your wording obnoxious. This comes across as trying to insult me personally rather than trying to make a substantive argument that could conceivably be persuasive to me or to any outside person who’s on the fence about this topic.
I’m guessing you may have wrongly inferred that I reject certain factual claims on moral grounds, when really I reject them on factual grounds and part of what I’m criticizing is the ignorance or poor reasoning that I strain to imagine must be required to believe such plainly false and obviously ridiculous things. Yet it is also fair to criticize such epistemic mistakes for their moral ramifications. For example, if someone thinks world affairs are orchestrated by a global Jewish conspiracy, that’s just an unbelievably stupid thing to think and they can be rightly criticized for believing something so stupid. They can also rightly be criticized for this mistake because it also implies immoral conduct, namely, unjustifiable discrimination and hatred against Jewish people. If someone thinks this is a failure to decouple or a failure to appreciate the is/ought distinction, they don’t know what they’re talking about. In that case, they should study philosophy and not make up nonsense.[1]
But I will caveat that I actually have no idea what you meant, specifically, because you didn’t say. And maybe what you intended to say was actually correct and well-reasoned. Maybe if you explained your logic, I would accept it and agree. I don’t know.
I don't know what you meant by your comment specifically, but, in general, I have sometimes found arguments about decoupling to be just unbelievably poorly reasoned because they don't account for the most basic considerations. (The problem is not with the concept of decoupling in principle, in the abstract; it's that people try to apply this concept in ways that make no sense.)[2] They are woefully incurious about what the opposing case might be and often contradict plain facts. For example, they might fail to distinguish between a boycott of an organization with morally objectionable views, which is intended to have a causal impact on the world, and the mere acknowledgment of both positive and negative facts about that organization. For instance:
Person A: I don't want to buy products from Corporation Inc. because they fund lobbying for evil policies.
Person B: But Corporation Inc. makes good products! Learn to decouple!
(This is based on a real example. Yes, this is ridiculous, and yet something very similar to this was actually said.)
People don't understand the basic concepts being discussed — e.g., the concept of a boycott and the rationale for boycotts — and then they say, "tut, tut, be rational!" But anyone could say "tut, tut, be rational" when anyone disagrees with them about anything (even in cases where they happen to be dead wrong and say things that don't make sense), so what on Earth is the point of saying that?
This kind of “tut, tut” comes across to me as epistemically sloppy. The more you scold someone who disagrees with you, the more you lose face if you have to admit you made an embarrassing reasoning mistake, so the less likely you will be to admit such mistakes and the more you’ll double down on silly arguments because losing face is so uncomfortable. So, a good way to hold wrong views indefinitely is to say “tut, tut” as much as possible.
But, that’s only generally speaking, and I don’t know what you meant specifically. Maybe what you meant to say actually made sense. I’ll give you the benefit of the doubt, and an opportunity to elaborate, if you want.
This also obviously applies to prudential cases, in addition to moral cases. If you make a stupid mistake like putting the cereal in the fridge and the milk in the cupboard, you can laugh about that because the stakes are low. If you make a stupid mistake that is also dangerous to you, such as mixing cleaning products that contain bleach and ammonia (which produces chlorine gas), then you can criticize this mistake on prudential grounds as well as epistemic grounds. (To criticize a mistake on prudential or moral grounds is only valid if it is indeed a mistake, obviously.) And no one should assert this criticism is based on some kind of basic logical error where you’re failing to distinguish prudential considerations from epistemic ones — anyone saying that would not know what they’re talking about and should take a philosophy class.
In general, a common sort of reasoning error I observe is that people invoke a correct principle and apply it incorrectly. When they are pressed on the incorrect application, they fall back to defending the principle in the abstract, which is obviously not the point. By analogy, if someone you knew was talking about investing 100% of their savings in GameStop, it would be exasperating if they defended this decision by citing — very strong, quite plausibly completely correct — research about how it’s optimal to have an all-equity portfolio. It would be infuriating if they accused you of not understanding the rationale for investing in equities simply because you think a 100% GameStop portfolio is reckless. The simple lesson of this analogy: applying correct principles does not lead to correct conclusions if the principle is applied incorrectly! It’s obvious to spot when I deliberately make the example obvious to illustrate the point, but often less obvious to spot in practice — which is why so many people make errors of this kind so often.
An example here is this quote, which comes dangerously close to "these people have a morality that you find offensive, therefore they are wrong on the actual facts of the matter" (otherwise you would make the Nazi source allegations less central to your criticism here):
(I don’t hold the moral views of what the quote is saying, to be clear).
It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty and can cite overtly racist and white supremacist sources to support this argument (even a source with significant connections to the 1930s and 1940s Nazi Party in Germany and the American Nazi Party, a neo-Nazi party) and that post can receive a significant amount of approval and defense from people in EA, even after the thin disguise over top of the racism is removed by perceptive readers. That is such a bonkers thing and such a morally repugnant thing, I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can’t correct it.[2]
It’s really quite something that you wrote almost 2000 words and didn’t include a single primary citation to support any of those claims. Even given that most of them are transparently false to anyone who’s spent 5 minutes reading either LW or the EA Forum, I think I’d be able to dig up something superficially plausible with which to smear them.
And if anyone is curious about why Yarrow might have an axe to grind, they’re welcome to examine this post, along with the associated comment thread.
Edit: changed the link to an archive.org copy, since the post was moved to draft after I posted this.
Edit2: I was incorrect about when it was moved back to a draft, see this comment.
The sources are cited in quite literally the first sentence of the quick take.
To my knowledge, every specific factual claim I made is true and none are false. If you want to challenge one specific factual claim, I would be willing to provide sources for that one claim. But I don’t want to be here all day.
Since I guess you have access to LessWrong’s logs given your bio, are you able to check when and by whom that LessWrong post was moved to drafts, i.e., if it was indeed moved to drafts after your comment and not before, and if it was, whether it was moved to drafts by the user who posted it rather than by a site admin or moderator?
And, indeed, this seems to show that your accusation that there was an attempt to hide the post after you brought it up was false. An apology wouldn’t hurt!
The other false accusation was that I didn’t cite any sources, when in fact I did in the very first sentence of my quick take. Apart from that, I also directly linked to an EA Forum post in my quick take. So, however you slice it, that accusation is wrong. Here, too, an apology wouldn’t hurt if you want to signal good faith.
My offer is still open to provide sources for any one factual claim in the quick take if you want to challenge one of them. (But, as I said, I don’t want to be here all day, so please keep it to one.)
Incidentally, in my opinion, that post supports my argument about anti-LGBT attitudes on LessWrong. I don’t think I could have much success persuading LessWrong users of that, however, and that was not the intention of this quick take.
Yes, indeed, there was only an attempt to hide the post three weeks ago. I regret the sloppiness in the details of my accusation.
The other false accusation was that I didn’t cite any sources
I did not say that you did not cite any sources. Perhaps the thing I said was confusingly worded? You did not include any links to any of the incidents that you describe.
Huh? Why not just admit your mistake? Why double down on an error?
By the way, who do you think saved that post in the Wayback Machine on the exact same date it was moved to drafts? A remarkable coincidence, wouldn’t you say?
Your initial comment insinuated that the incidents I described were made up. But the incidents were not made up. They really happened. And I linked both to extensive documentation on Reflective Altruism and directly to a post on the EA Forum so that anyone could verify that the incidents I described occurred.
There was one incident I described that I chose not to include a link to out of consideration for your coworker. I wanted to avoid presenting the quick take as a personal attack on them. (That was not the point of what I wrote.) I still think that is the right call. But I can privately provide the link to anyone who requests it if there is any doubt this incident actually occurred.
But, in any case, I very much doubt we are going to have a constructive conversation at this point. Even though I strongly disagree with your views and I still think you owe me an apology, I sincerely wish you happiness.
I think it may be illuminating to conceptualise EA as having several “attractor failure modes” that it can coalesce into if insufficient attention is paid to keeping EA community spaces from sliding into them. You’ve noted some of these failure modes in your post, and they are often related to other movements and subcultures that overlap with EA. They include (but are not limited to):
The cultic self-help conspiratorial milieu (probably from rationalism)
Racism and eugenicist ideas
Doomspirals (many versions depending on cause area, but “AI will kill us all P(doom) = 95%” is definitely one of them)
The question, then, is how to balance community moderation so that it both promotes the environment of individual truth-seeking necessary to support EA as a philosophical concept and avoids these failure modes, given a documented history within EA of them leading to things that didn’t work out so well. I wonder what the community health team at CEA (the Centre for Effective Altruism) have said on the matter.
I’m very glad of Reflective Altruism’s work and I’m sorry to see the downvotes on this post. Would you consider a repost as a main post with dialed down emotive language in order to better reach people? I’d be happy to give you feedback on a draft.
Thanks. I’ll think about the idea of doing a post, but, honestly, what I wrote was what I wanted to write. I don’t see the emotion or the intensity of the writing as a failure or an indulgence, but as me saying what I really mean, and saying what needs to be said. What good’s sugar-coating it?
Something that anyone can do (David Thorstad, the philosopher who writes the Reflective Altruism blog, has given permission in comments I’ve seen) is simply repost the Reflective Altruism posts about LessWrong and about the EA Forum here, on the EA Forum. Those posts are extremely dry, extremely factual, and not particularly opinionated. They’re more investigative than argumentative.
I have thought about what, practically, to do about these problems in EA, but I don’t think I have particularly clear thoughts or good thoughts on that. An option that would feel deeply regrettable and unfortunate to me would be for the subset of the EA movement that shares my discomfort to try to distinguish itself under some label such as effective giving. (Someone could probably come up with a better label if they thought about it for a while.)
I hope that there is a way for people like me to save what they love about this movement. I would be curious to hear ideas about this from people who feel similarly.
I used to feel so strongly about effective altruism. But my heart isn’t in it anymore.
I still care about the same old stuff I used to care about, like donating what I can to important charities and trying to pick the charities that are the most cost-effective. Or caring about animals and trying to figure out how to do right by them, even though I haven’t been able to sustain a vegan diet for more than a short time. And so on.
But there isn’t a community or a movement anymore where I want to talk about these sorts of things with people. That community and movement existed, at least in my local area and at least to a limited extent in some online spaces, from about 2015 to 2017 or 2018.
These are the reasons for my feelings about the effective altruist community/movement, especially over the last one or two years:
-The AGI thing has gotten completely out of hand. I wrote a brief post here about why I strongly disagree with near-term AGI predictions. I wrote a long comment here about how AGI’s takeover of effective altruism has left me disappointed, disturbed, and alienated. 80,000 Hours and Will MacAskill have both pivoted to focusing exclusively or almost exclusively on AGI. AGI talk has dominated the EA Forum for a while. It feels like AGI is what the movement is mostly about now, so now I just disagree with most of what effective altruism is about.
-The extent to which LessWrong culture has taken over or “colonized” effective altruism culture is such a bummer. I know there’s been at least a bit of overlap for a long time, but ten years ago it felt like effective altruism had its own, unique culture and nowadays it feels like the LessWrong culture has almost completely taken over. I have never felt good about LessWrong or “rationalism” and the more knowledge and experience of it I’ve gained, the more I’ve accumulated a sense of repugnance, horror, and anger toward that culture and ideology. I hate to see that become what effective altruism is like.
-The stories about sexual harassment are so disgusting. They’re really, really bad and crazy. And it’s so annoying how many comments you see on EA Forum posts about sexual harassment that make exhausting, unempathetic, arrogant, and frankly ridiculous statements, if not borderline incomprehensible in some cases. You see these stories of sexual harassment in the posts and you see evidence of the culture that enables sexual harassment in the comments. Very, very, very bad. Not my idea of a community I can wholeheartedly feel I belong to.
-Kind of a similar story with sexism, racism, and transphobia. The level of underreaction I’ve seen to instances of racism has been crazymaking. It’s similar to the comments under the posts about sexual harassment. You see people justifying or downplaying clearly immoral behaviour. It’s sickening.
-A lot of the response to the Nonlinear controversy was disheartening. It was disheartening to see how many people were eager to enable, justify, excuse, downplay, etc. bad behaviour. Sometimes aggressively, arrogantly, and rudely. It was also disillusioning to see how many people were so… easily fooled.
-Nobody talks normal in this community. At least not on this forum, in blogs, and on podcasts. I hate the LessWrong lingo. To the extent the EA Forum has its own distinct lingo, I probably hate that too. The lingo is great if you want to look smart. It’s not so great if you want other people to understand what the hell you are talking about. In a few cases, it seems like it might even be deliberate obscurantism. But mostly it’s just people making poor choices around communication and writing style and word choice, maybe for some good reasons, maybe for some bad reasons, but bad choices either way. I think it’s rare that writing with a more normal diction wouldn’t enhance people’s understanding of what you’re trying to say, even if you’re only trying to communicate with people who are steeped in the effective altruist niche. I don’t think the effective altruist sublanguage is serving good thinking or good communication.
-I see a lot of interesting conjecture elevated to the level of conventional wisdom. Someone in the EA or LessWrong or rationalist subculture writes a creative, original, evocative blog post or forum post and then it becomes a meme, and those memes end up taking on a lot of influence over the discourse. Some of these ideas are probably promising. Many of them probably contain at least a grain of truth or insight. But they become conventional wisdom without enough scrutiny. Simply because an idea is “homegrown”, it gets treated as if it had the force of a scientific idea that’s been debated and tested in peer-reviewed journals for 20 years, or of a widely held precept of academic philosophy. That seems intellectually the wrong thing to do, and also weirdly self-aggrandizing.
-An attitude I could call “EA exceptionalism”, where people assert that people involved in effective altruism are exceptionally smart, exceptionally wise, exceptionally good, exceptionally selfless, etc. Not just above the average or median (however you would measure that), but part of a rare elite and maybe even superior to everyone else in the world. I see no evidence this is true. (In these sorts of discussions, you also sometimes see the lame argument that effective altruism is definitionally the correct approach to life because effective altruism means doing the most good and if something isn’t doing the most good, then it isn’t EA. The obvious implication of this argument is that what’s called “EA” might not be true EA, and maybe true EA looks nothing like “EA”. So, this argument is not a defense of the self-identified “EA” movement or community or self-identified “EA” thought.)
-There is a dark undercurrent to some EA thought, along the lines of negative utilitarianism, anti-natalism, misanthropy, and pessimism. I think there is a risk of this promoting suicidal ideation because it basically is suicidal ideation.
-Too much of the discourse seems to revolve around how to control people’s behaviours or beliefs. It’s a bit too House of Cards. I recently read about the psychologist Kurt Lewin’s study on the most effective ways to convince women to use animal organs (e.g. kidneys, livers, hearts) in their cooking during the meat shortages of World War II. He found that a less paternalistic approach that showed more respect for the women’s autonomy was more effective in getting them to incorporate animal organs into their cooking. The way I think about this is: you didn’t have to be manipulated to get to the point where you are in believing what you believe or caring this much about this issue. So, instead of thinking of how to best manipulate people, think about how you got to the point where you are and try to let people in on that in an honest, straightforward way. Not only is this probably more effective, it’s also more moral and shows more epistemic humility (you might be wrong about what you believe, and that’s one reason not to try to manipulate people into believing it).
-A few more things but this list is already long enough.
Put all this together and the old stuff I cared about (charity effectiveness, giving what I can, expanding my moral circle) is lost in a mess of other stuff that is antithetical to what I value and what I believe. I’m not even sure the effective altruism movement should exist anymore. The world might be better off if it closed down shop. I don’t know. It could free up a lot of creativity and focus and time and resources to work on other things that might end up being better things to work on.
I still think there is value in the version of effective altruism I knew around 2015, when the primary focus was on global poverty and the secondary focus was on animal welfare, and AGI was on the margins. That version of effective altruism is so different from what exists today — which is mostly about AGI and has mostly been taken over by the rationalist subculture — that I have to consider those two different things. Maybe the old thing will find new life in some new form. I hope so.
I’d distinguish here between the community and actual EA work. The community, and especially its leaders, have undoubtedly gotten more AI-focused (and/or publicly admitted to a degree of focus on AI they’ve always had) and rationalist-ish. But in terms of actual altruistic activity, I am very uncertain whether there is less money being spent by EAs on animal welfare or global health and development in 2025 than there was in 2015 or 2018. (I looked on Open Phil’s website, and so far this year it seems well down from 2018 but well up from 2015, though two months isn’t much of a sample.) Not that that means you’re not allowed to feel sad about the loss of community, but I am not sure we are actually doing less good in these areas than we used to.
Yes, this seems similar to how I feel: I think the major donor(s) have re-prioritized, but I am not so sure how many people have switched from other causes to AI. I think EA is more left to the grassroots now, and the forum has probably increased in importance. As long as the major donors don’t make the forum all about AI; if they do, we’ll have to create a new forum! But as donors shift towards AI, the forum will inevitably see more AI content. Maybe add some features to “balance” the forum posts so one gets representative content across all cause areas, much like they made it possible to separate out community posts?
On cause prioritization, is there a more recent breakdown of how more and less engaged EAs prioritize? Like an update of this? I looked for this from the 2024 survey but could not find it easily: https://forum.effectivealtruism.org/posts/sK5TDD8sCBsga5XYg/ea-survey-cause-prioritization
Thanks for sharing this. While I personally believe the shift in focus toward AI is justified (I also believe working on animal welfare is more impactful than global poverty), I can definitely sympathize with many of the other concerns you shared and agree with many of them (especially LessWrong lingo taking over, the underreaction to sexism/racism, and the Nonlinear controversy not being taken seriously enough). While I would completely understand in your situation if you don’t want to interact with the community anymore, I just want to share that I believe your voice is really important and I hope you continue to engage with EA! I wouldn’t want the movement to discourage anyone who shares its principles (like “let’s use our time and resources to help others the most”), but disagrees with how it’s being put into practice, from actively participating.
My memory is that a large number of people took the Nonlinear (NL) controversy seriously, and the original threads on it were long and full of comments hostile to NL; only after someone posted a long piece in defence of NL did some sympathy shift back to them. But even then there are 90-something to 30-something agree votes and 200 karma on Yarrow’s comment saying NL still seems bad: https://forum.effectivealtruism.org/posts/H4DYehKLxZ5NpQdBC/nonlinear-s-evidence-debunking-false-and-misleading-claims?commentId=7YxPKCW3nCwWn2swb
I don’t think people really dropped the ball here; people were honestly struggling to take accusations of bad behaviour seriously without getting into witch-hunt dynamics.
Good point, I guess my lasting impression wasn’t entirely fair to how things played out. In any case, the most important part of my message is that I hope he doesn’t feel discouraged from actively participating in EA.
If the people arguing that there is an AI bubble turn out to be correct and the bubble pops, to what extent would that change people’s minds about near-term AGI?
I strongly suspect there is an AI bubble because the financial expectations around AI seem to be based on AI significantly enhancing productivity and the evidence seems to show it doesn’t do that yet. This could change — and I think that’s what a lot of people in the business world are thinking and hoping. But my view is a) LLMs have fundamental weaknesses that make this unlikely and b) scaling is running out of steam.
Scaling running out of steam actually means three things:
1) Each new 10x increase in compute is less practically or qualitatively valuable than previous 10x increases in compute.
2) Each new 10x increase in compute is getting harder to pull off because the amount of money involved is getting unwieldy.
3) There is an absolute ceiling to the amount of data LLMs can train on that they are probably approaching.
So, AI investment is dependent on financial expectations that depend on LLMs enhancing productivity, which isn’t happening and probably won’t happen due to fundamental problems with LLMs and due to scaling becoming less valuable and less feasible. This implies an AI bubble, which implies the bubble will eventually pop.
So, if the bubble pops, will that lead people who currently have a much higher estimation than I do of LLMs’ current capabilities and near-term prospects to lower that estimation? If AI investment turns out to be a bubble, and it pops, would you change your mind about near-term AGI? Would you think it’s much less likely? Would you think AGI is probably much farther away?
I’m really curious what people think about this, so I posted it as a question here. Hopefully I’ll get some responses.
Rate limiting on the EA Forum is too strict. Given that people karma downvote because of disagreement, rather than because of quality or civility — or they judge quality and/or civility largely on the basis of what they agree or disagree with — there is a huge disincentive against expressing unpopular or controversial opinions (relative to the views of active EA Forum users, not necessarily relative to the general public or relevant expert communities) on certain topics.
This is a message I saw recently:
You aren’t just rate limited for 24 hours once you fall below the recent karma threshold (which can be triggered by one comment that is unpopular with a handful of people), you’re rate limited for as many days as it takes you to gain 25 net karma on new comments — which might take a while, since you can only leave one comment per day, and, also, people might keep downvoting your unpopular comment. (Unless you delete it — which I think I’ve seen happen, but I won’t do, myself, because I’d rather be rate limited than self-censor.)
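To spell out the mechanics as I understand them, here’s a minimal sketch in Python. Everything in it (the 20-item window, the threshold value, the one-comment-per-day cap, and the +25 release condition) is my own reconstruction from the message I got and from descriptions elsewhere in this thread, not the forum’s actual code, which I haven’t read, and the names are made up:

```python
# A rough sketch of the rate-limiting rule as I understand it. The window
# size, threshold, and release condition are my reconstruction, not the
# forum's actual code (which I haven't read), so treat them as placeholders.

RECENT_WINDOW = 20          # how many recent posts/comments are considered
RECENT_KARMA_THRESHOLD = 0  # assumed: dip below this and the limit kicks in
RELEASE_KARMA = 25          # net karma needed on new comments to be released
COMMENTS_PER_DAY_WHILE_LIMITED = 1

def is_rate_limited(recent_vote_karma):
    """True if net vote karma over the last RECENT_WINDOW items is below threshold.

    recent_vote_karma: net vote karma on the user's most recent posts/comments,
    excluding the default karma each one starts with.
    """
    window = recent_vote_karma[-RECENT_WINDOW:]
    # One heavily downvoted comment can drag the whole window under the
    # threshold, regardless of account age or total karma.
    return sum(window) < RECENT_KARMA_THRESHOLD

def days_until_released(karma_on_each_days_comment):
    """How many one-comment days it takes to accumulate RELEASE_KARMA.

    karma_on_each_days_comment: net karma the single allowed comment receives
    on each successive day. Returns None if the threshold is never reached
    within the simulated period.
    """
    total = 0
    for day, karma in enumerate(karma_on_each_days_comment, start=1):
        total += karma
        if total >= RELEASE_KARMA:
            return day
    return None

# Example: if each daily comment only nets +2, release takes about two weeks.
print(days_until_released([2] * 30))  # -> 13
```

The point the sketch makes is that the release condition compounds with the one-comment-per-day cap: the only way out is through new comments, which can themselves be downvoted.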
The rate limiting system is a brilliant idea for new users or users who have less than 50 total karma — the ones who have little plant icons next to their names. It’s an elegant, automatic way to stop spam, trolling, and other abuses. But my forum account is 2.5 years old and I have over 1,000 karma. I have 24 posts published over 2 years, all with positive karma. My average karma per post/comment is +2.3 (not counting the default karma that all post/comments start with; this is just counting karma from people’s votes).
Examples of comments I’ve gotten downvoted into the net −1 karma or lower range include a methodological critique of a survey that was later accepted to be correct and led to the research report of an EA-adjacent organization getting revised. In another case, a comment was downvoted to negative karma when it was only an attempt to correct the misuse of a technical term in machine learning — a topic which anyone can confirm I’ve gotten right with a few fairly quick Google searches. People are absolutely not just downvoting comments that are poor quality or rude by any reasonable standard. They are downvoting things they disagree with or dislike for some other reason. (There are many other examples like the ones I just gave, including everything from directly answering a question to clarifying a point of disagreement to expressing a fairly anodyne and mainstream opinion that at least some prominent experts in the relevant field agree with.) Given this, karma downvoting as an automatic moderation tool with thresholds this sensitive just discourages disagreement.
One of the most important cognitive biases to look out for in a context like EA is group polarization, which is the tendency of individuals’ views to become more extreme once they join a group, even if each of the individuals had less extreme views before joining the group (i.e., they aren’t necessarily being converted by a few zealots who already had extreme views before joining). One way to mitigate group polarization is to have a high tolerance for internal disagreement and debate. I think the EA Forum does have that tolerance for certain topics and within certain windows of accepted opinions for most topics that are discussed, but not for other topics or only within a window that is quite narrow if you compare it to, say, the general population or expert opinion.
For example, according to one survey, 76% of AI experts believe it’s unlikely or very unlikely that LLMs will scale to AGI, yet the opinion of EA Forum users seems to be the opposite. Some people on the EA Forum don’t seem to consider the majority expert opinion worth taking seriously. To me, that looks like group polarization in action. It’s one thing to disagree with expert opinion with some degree of uncertainty and epistemic humility; it’s another thing to see expert opinion as beneath serious discussion.
I don’t know what specific tweaks to the rate limiting system would be best. Maybe just turn it off altogether for users with over 500 karma (and rely on reporting posts/comments and moderator intervention to handle real problems), or as Jason suggested here, have the karma threshold trigger manual review by a moderator rather than automatic rate limiting. Jason also made some other interesting suggestions for tweaks in that comment and noted, correctly:
This actually works. I am reluctant to criticize the ideas of, or express disagreement with, certain organizations/books because of rate limiting, and rate limiting is the #1 thing that makes me feel like giving up on trying to engage in intellectual debate and discussion and just quitting the EA Forum.
I may be slow to reply to any comments on this quick take due to the forum’s rate limiting.
I think this highlights why some necessary design features of the karma system don’t translate well to a system that imposes soft suspensions on users. (To be clear, I find a one-comment-per-day limit based on the past 20 comments/posts to cross the line into soft suspension territory; I do not suggest that rate limits are inherently soft suspensions.)
I wrote a few days ago about why karma votes need to be anonymous and shouldn’t (at least generally) require the voter to explain their reasoning; the votes suggested general agreement on those points. But a soft suspension of an established user is a different animal, and requires greater safeguards to protect both the user and the openness of the Forum to alternative views.
I should emphasize that I don’t know who cast the downvotes that led to Yarrow’s soft suspension (which were on this post about MIRI), or why they cast their votes. I also don’t follow MIRI’s work carefully enough to have a clear opinion on the merits of any individual vote through the lights of the ordinary purposes of karma. So I do not intend to imply dodgy conduct by anyone. But: “Justice must not only be done, but must also be seen to be done.” People who are considering stating unpopular opinions shouldn’t have to trust voters to the extent they have to at present to avoid being soft suspended.
Neutrality: Because the votes were anonymous, it is possible that people who were involved in the dispute were casting votes that had the effect of soft-suspending Yarrow.
Accountability: No one has to accept responsibility and the potential for criticism for imposing a soft-suspension via karma downvotes. Not even in their own minds—since nominally all they did was downvote particular posts.
Representativeness: A relatively small number of users on a single thread—for whom there is no evidence of being representative of the Forum community as a whole—cast the votes in question. Their votes have decided for the rest of the community that we won’t be hearing much from Yarrow (on any topic) for a while.[1]
Reasoning transparency: Stating (or at least documenting) one’s reasoning serves as a check on decisions made on minimal or iffy reasoning getting through. [Moreover, even if voters had been doing so silently, they were unlikely to be reasoning about a vote to soft suspend Yarrow, which is what their votes were whether they realized it or not.]
There are good reasons to find that the virtues of accountability, representativeness, and reasoning transparency are outweighed by other considerations when it comes to karma generally. (As for neutrality, I think we have to accept that technical and practical limitations exist.) But their absence when deciding to soft suspend someone creates too high a risk of error for the affected user, too high a risk of suppressing viewpoints that are unpopular with elements of the Forum userbase, and too much chilling effect on users’ willingness to state certain viewpoints. I continue to believe that, for more established users, karma count should only trigger a moderator review to assess whether a soft suspension is warranted.
Although the mods aren’t necessarily representative in the abstract, they are more likely to not have particular views on a given issue than the group of people who actively participate on a given thread (and especially those who read the heavily downvoted comments on that thread). I also think the mods are likely to have a better understanding of their role as representatives of the community than individual voters do, which mitigates this concern.
I’ve really appreciated comments and reflections from @Yarrow Bouchard 🔸 and I think in his case at least this does feel a bit unfair. It’s good to encourage new people on the forum, unless they are posting particularly egregious things, which I don’t think he has been.
She, but thank you!
Assorted thoughts
Rate limits should not apply to comments on your own quick takes
Rate limits could maybe not count negative karma below −10 or so, it seems much better to rate limit someone only when they have multiple downvoted comments
2.4:1 is not a very high karma:submission ratio. I have 10:1 even if you exclude the April Fools’ Day posts, though that could be because I have more popular opinions, which means that I could double my comment rate and get −1 karma on the extras and still be at 3.5
if I were Yarrow I would contextualize more or use more friendly phrasing or something, and also not be bothered too much by single downvotes
From scanning the linked comments I think that downvoters often think the comment in question has bad reasoning and detracts from effective discussion, not just that they disagree
Deliberately not opining on the echo chamber question
Can you explain what you mean by “contextualizing more”? (What a curiously recursive question...)
You definitely have more popular opinions (among the EA Forum audience), and also you seem to court controversy less, i.e. a lot of your posts are about topics that aren’t controversial on the EA Forum. For example, if you were to make a pseudonymous account and write posts/comments arguing that near-term AGI is highly unlikely, I think you would definitely get a much lower karma to submission ratio, even if you put just as much effort and care into them as the posts/comments you’ve written on the forum so far. Do you think it wouldn’t turn out that way?
I’ve been downvoted on things that are clearly correct, e.g. the standard definitions of terms in machine learning (which anyone can Google), or a methodological critique that the Forecasting Research Institute later acknowledged was correct and revised their research to reflect. In other cases, the claims are controversial, but they are also claims where prominent AI experts like Andrej Karpathy, Yann LeCun, or Ilya Sutskever have said exactly the same thing as I said — and, indeed, in some cases I’m literally citing them — and it would be wild to think these sorts of claims are below the quality threshold for the EA Forum. I think that should make you question whether downvotes are a reliable guide to the quality of contributions.
One-off instances of one person downvoting don’t bother me that much — that literally doesn’t matter, as long as it really is one-off — what bothers me is the pattern. It isn’t just with my posts/comments, either, it’s across the board on the forum. I see it all the time with other contributors as well. I feel uneasy dragging those people into this discussion without their permission — it’s easier to talk about myself — but this is an overall pattern.
Whether reasoning is good or bad is itself bound to be contested when you’re debating topics about which there is a lot of disagreement. Just downvoting what you judge to be bad reasoning will, statistically, amount to downvoting what you disagree with. Since downvotes discourage and, in some cases, disable (through the forum’s software) disagreement, you should ask: is that the desired outcome? Personally, I rarely (pretty much never) downvote based on what I perceive to be the reasoning quality, for exactly this reason.
When people on the EA Forum deeply engage with the substance of what I have to say, I’ve actually found a really high rate of them changing their minds (not necessarily from P to ¬P, but shifting along a spectrum and rethinking some details). It’s a very small sample size, only a few people, but it’s something like this: out of the five people I’ve had a lengthy back-and-forth with over the last two months, three changed their minds in some significant way. (I’m not doing rigorous statistics here, just counting examples from memory.) And in two of those three cases, the other person’s tone started out highly confident, giving me the impression they initially thought there was basically no chance I had any good points that were going to convince them. That is the counterbalance to everything else, because it’s really encouraging!
I put in an effort to make my tone friendly and conciliatory, and I’m aware I probably come off as a bit testy some of the time, but I’m often responding to a much harsher delivery from the other person and underreacting in order to deescalate the tension. (For example, the person who got the ML definitions wrong started out by accusing me of “bad faith” based on their misunderstanding of the definitions. There were multiple rounds of me engaging with politeness and cordiality before I started getting a bit testy. That’s just one example, but there are others — it’s frequently a similar dynamic. Disagreeing with the majority opinion of the group is a thankless job because you have to be nicer to people than they are to you, and then that still isn’t good enough and people say you should be even nicer.)
I mean it in this sense: making people think you’re not part of the outgroup and don’t have objectionable beliefs related to the ones you actually hold, in whatever way is sensible and honest.
Maybe LW is better at using the disagreement button, as I find it’s pretty common on LessWrong for unpopular opinions to get lots of upvotes and disagree votes. One could use the API to see if the correlations are different there.
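Something like the sketch below would do the comparison, once per-comment karma and agreement scores have been pulled out of each site’s GraphQL API. I haven’t checked the actual schema, so the data-collection step is left out and the numbers shown are made-up placeholders:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Made-up toy data standing in for scraped per-comment scores:
# overall karma and agreement karma for the same comments on each site.
lw_karma, lw_agree = [12, -4, 7, 30, -2, 5], [10, -6, 1, 22, -3, 4]
eaf_karma, eaf_agree = [15, -8, 3, 25, -5, 6], [14, -9, 2, 24, -6, 5]

# If LessWrong voters really do separate "I disagree" from "this is low
# quality", its karma-agreement correlation should come out lower.
print(pearson(lw_karma, lw_agree), pearson(eaf_karma, eaf_agree))
```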
Huh? Why would it matter whether or not I’m part of “the outgroup”...? What does that mean?
I think this is a significant reason why people downvote some, but not all, things they disagree with. Especially a member of the outgroup who makes arguments EAs have refuted before and need to reexplain, not saying it’s actually you
What is “the outgroup”?
Claude thinks possible outgroups include the following, which is similar to what I had in mind
a) I’m not sure all of those count as someone who would necessarily be an outsider to EA (e.g. Will MacAskill only assigns a 50% probability to consequentialism being correct, and he and others in EA have long emphasized pluralism about normative ethical theories; there’s been an EA system change group on Facebook since 2015 and discourse around systemic change has been happening in EA since before then)
b) Even if you do consider people in all those categories to be outsiders to EA or part of “the out-group”, us/them or in-group/out-group thinking seems like a bad idea, possibly leading to insularity, incuriosity, and overconfidence in wrong views
c) It’s especially a bad idea to not only think in in-group/out-group terms and seek to shut down perspectives of “the out-group” but also to cast suspicion on the in-group/out-group status of anyone in an EA context who you happen to disagree with about something, even something minor — that seems like a morally, subculturally, and epistemically bankrupt approach
You’re shooting the messenger. I’m not advocating for downvoting posts that smell of “the outgroup”, just saying that this happens in most communities that are centered around an ideological or even methodological framework. It’s a way you can be downvoted while still being correct, especially from the LEAST thoughtful 25% of EA forum voters
Please read the quote from Claude more carefully. MacAskill is not an “anti-utilitarian” who thinks consequentialism is “fundamentally misguided”, he’s the moral uncertainty guy. The moral parliament usually recommends actions similar to consequentialism with side constraints in practice.
I probably won’t engage more with this conversation.
I don’t know what he meant, but my guess FWIW is this 2014 essay.
I understand the general concept of ingroup/outgroup, but what specifically does that mean in this context?
I don’t know, sorry. I admittedly tend to steer clear of community debates as they make me sad, probably shouldn’t have commented in the first place...
I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.
Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, some stores sold out of face masks in several different cities in North America. (One example of many.) The oldest post on LessWrong tagged with “covid-19” is from well after this started happening. (I also searched the forum for posts containing “covid” or “coronavirus” and sorted by oldest. I couldn’t find an older post that was relevant.) The LessWrong post is written by a self-described “prepper” who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.
If you look at the covid-19 tag on LessWrong, the next post after that first one, the prepper one, is on February 5, 2020. The posts don’t start to get really worried about covid until mid-to-late February.
How was the rest of the world reacting at that time? Here’s a New York Times article from February 2, 2020, entitled “Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say”, published well before any of the worried posts on LessWrong:
The tone of the article is fairly alarmed, noting that in China the streets are deserted due to the outbreak, it compares the novel coronavirus to the 1918-1920 Spanish flu, and it gives expert quotes like this one:
The worried posts on LessWrong don’t start until weeks after this article was published. On a February 25, 2020 post asking when CFAR (the Center for Applied Rationality) should cancel its in-person workshop, the top answer cites the CDC’s guidance at the time about covid-19. It says that CFAR’s workshops “should be canceled once U.S. spread is confirmed and mitigation measures such as social distancing and school closures start to be announced.” This is about 2-3 weeks out from that stuff happening. So, what exactly is being called early here?
CFAR is based in the San Francisco Bay Area, as are Lightcone Infrastructure and MIRI (the Machine Intelligence Research Institute), two other organizations associated with the LessWrong community. On February 25, 2020, the city of San Francisco declared a state of emergency over covid. (Nearby Santa Clara County, where most of what people think of as Silicon Valley is located, declared a local health emergency on February 10.) At this point in time, posts on LessWrong remain overall cautious and ambivalent.
By the time the posts on LessWrong get really, really worried, in the last few days of February and the first week of March, much of the rest of the world was reacting in the same way.
From February 14 to February 25, the S&P 500 dropped about 7.5%. Around this time, financial analysts and economists issued warnings about the global economy.
Between February 21 and February 27, Italy began its first lockdowns of areas where covid outbreaks had occurred.
On February 25, 2020, the CDC warned Americans of the possibility that “disruption to everyday life may be severe”. The CDC made this bracing statement:
Another line from the CDC:
On February 26, Canada’s Health Minister advised Canadians to stockpile food and medication.
The most prominent LessWrong post from late February warning people to prepare for covid came a few days later, on February 28. So, on this comparison, LessWrong was actually slightly behind the curve. (Oddly, that post insinuates that nobody else is telling people to prepare for covid yet, and congratulates itself on being ahead of the curve.)
At the beginning of March, the number of LessWrong posts tagged with covid-19 explodes, and the tone gets much more alarmed. The rest of the world was responding similarly at this time. For example, on February 29, 2020, Washington State declared a state of emergency around covid. On March 4, Governor Gavin Newsom did the same in California. The governor of Hawaii declared an emergency the same day, and over the next few days, many more states piled on.
Around the same time, the general public was becoming alarmed about covid. In the last days of February and the first days of March, many people stockpiled food and supplies. On February 29, 2020, PBS ran an article describing an example of this at a Costco in Oregon:
A March 1, 2020 article in the Los Angeles Times reported on stores in California running out of product as shoppers stockpiled. On March 2, an article in Newsweek described the same happening in Seattle:
In Canada, the public was responding the same way. Global News reported on March 3, 2020 that a Costco in Ontario ran out of bottled water, toilet paper, and paper towels, and that the situation was similar at other stores around the country. The spike in worried posts on LessWrong coincides with the wider public’s reaction. (If anything, the posts on LessWrong are very slightly behind the news articles about stores being picked clean by shoppers stockpiling.)
On March 5, 2020, the cruise ship the Grand Princess made the news because it was stranded off the coast of California due to a covid outbreak on board. I remember this as being one seminal moment of awareness around covid. It was a big story. At this point, LessWrong posts are definitely in no way ahead of the curve, since everyone is talking about covid now.
On March 8, 2020, Italy put a quarter of its population under lockdown, then put the whole country on lockdown on March 10. On March 11, the World Health Organization declared covid-19 a global pandemic. (The same day, the NBA suspended the season and Tom Hanks publicly disclosed he had covid.) On March 12, Ohio closed its schools statewide. The U.S. declared a national emergency on March 13. The same day, 15 more U.S. states closed their schools. Also on the same day, Canada’s Parliament shut down because of the pandemic. By now, everyone knows it’s a crisis.
So, did LessWrong call covid early? I see no evidence of that. The timeline of LessWrong posts about covid follows the same timeline as the world at large’s reaction, increasing in alarm as journalists, experts, and governments increasingly rang the alarm bells. In some comparisons, LessWrong’s response was a little bit behind.
The only curated post from this period (and the post with the third-highest karma, one of only four posts with over 100 karma) tells LessWrong users to prepare for covid three days after the CDC told Americans to prepare, and two days after Canada’s Health Minister told Canadians to stockpile food and medication. It was also three days after San Francisco declared a state of emergency. When that post was published, many people were already stockpiling supplies, partly because government health officials had told them to. (The LessWrong post was originally published on a blog a day before, and based on a note in the text apparently written the day before that, but that still puts the writing of the post a day after the CDC warning and the San Francisco declaration of a state of emergency.)
Unless there is some evidence that I didn’t turn up, it seems pretty clear the self-congratulatory narrative is a myth. The self-congratulation actually started in that post published on February 28, 2020, which, again, is odd given the CDC’s warning three days before (on the same day that San Francisco declared a state of emergency), analysts’ and economists’ warnings about the global economy a bit before that, and the New York Times article warning about a probable pandemic at the beginning of the month. The post is slightly behind the curve, but it’s gloating as if it’s way ahead.
Looking at the overall LessWrong post history in early 2020, LessWrong seems to have been, if anything, slightly behind the New York Times, the S&P 500, the CDC, and enough members of the general public to clear out some stores of certain products. By the time LessWrong posting reached a frenzy in the first week of March, the world was already responding — U.S. governors were declaring states of emergency, and everyone was talking about and worrying about covid.
I think people should be skeptical and even distrustful toward the claims of the LessWrong community, both on topics like pandemics and about its own track record and mythology. Obviously this myth is self-serving, and it was pretty easy for me to disprove in a short amount of time — so anyone who is curious can check and see that it’s not true. The people in the LessWrong community who believe the community called covid early probably believe that because it’s flattering. If they actually wondered if this is true or not and checked the timelines, it would become pretty clear that didn’t actually happen.
Edited to add on Monday, December 15, 2025 at 3:20pm Eastern:
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I’ll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances.
Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
My gloss on this situation is:
YARROW: Boy, one would have to be a complete moron to think that COVID-19 would not be a big deal as late as Feb 28 2020, i.e. something that would imminently upend life-as-usual. At this point China had locked down long ago, and even Italy had started locking down. Cases in the USA were going up and up, especially when you correct for the (tiny) amount of testing they were doing. The prepper community had certainly noticed, and was out in force buying out masks and such. Many public health authorities were also sounding alarms. What kind of complete moron would not see what’s happening here? Why is lesswrong patting themselves on the back for noticing something so glaringly obvious?
MY REPLY: Yes!! Yes, this is true!! Yes, you would have to be a complete moron to not make this inference!! …But man, by that definition, there sure were an awful lot of complete morons around, i.e. most everyone. LessWrong deserves credit for rising WAY above the incredibly dismal standards set by the public-at-large in the English-speaking world, even if they didn’t particularly surpass the higher standards of many virologists, preppers, etc.
My personal experience: As someone living in normie society in Massachusetts USA but reading lesswrong and related, I was crystal clear that everything about my life was about to wrenchingly change, weeks before any of my friends or coworkers were. And they were very weirded out by my insistence on this. Some were in outright denial (e.g. “COVID = anti-Chinese racism” was a very popular take well into February, maybe even into March, and certainly the “flu kills far more than COVID” take was widespread in early March, e.g. Anderson Cooper). Others were just thinking about things in far-mode; COVID was a thing that people argued about in the news, not a real-world thing that could or should affect one’s actual day-to-day life and decisions. “They can’t possibly shut down schools, that’s crazy”, a close family member told me days before they did.
Dominic Cummings cited “Seeing the Smoke” as being very influential in jolting him to action (and thus impacting UK COVID policy), see screenshot here, which implies that this essay said something that he (and others at the tip-top of the UK gov’t) did not already see as obvious at the time.
A funny example that sticks in my memory is a tweet by Eliezer Yudkowsky from March 11 2020. Trump had just tweeted:
Eliezer quote-tweeted that, with the commentary:
Not at all accurate. That’s not what I’m saying at all. It was a situation of high uncertainty, and the appropriate response was to be at least somewhat unsure, if not very unsure — yes, take precautions, think about it, learn about it, follow the public health advice. But I don’t think on February 28 anyone knew for sure what would happen, as opposed to making an uncertain call that turned out to be correct. The February 28 post I cite gives that sort of uncertain, precautionary advice, and I think it’s more or less reasonable advice — just a general ‘do some research, be prepared’ sort of thing.
It’s just that the post goes so far in patting itself on the back for being way ahead on this, when if someone in the LessWrong community had just posted about the CDC’s warning on the same day it was issued, or had posted about it when San Francisco declared a public health emergency, or had made a post noting that the S&P 500 had just fallen 7.5% and maybe that was a reason to be concerned, that would have put the first urgent warning about the pandemic a few days ahead of the February 28 post.
The takeaway of that post, and the takeaway of people who congratulate the LessWrong community on calling covid early, is that this is evidence that reading Yudkowsky’s Sequences or LessWrong posts or whatever promotes superior rationality, and is a vindication of the community’s beliefs. But that is the wrong conclusion to draw if something like 10-80% of the overall North American population (these figures are loosely based on polling cited in another comment) was at least equally concerned about covid-19 at least as early. 99.999% of the millions of people who were as concerned or more concerned, as early or earlier than the LessWrong community, haven’t read the Sequences and don’t know what LessWrong is. A strategy that would have worked better than reading the Sequences or LessWrong posts is: just listen to what the CDC is saying and what state and local public health authorities are saying.
It’s ridiculous to draw the conclusion that this a vindication of LessWrong’s approach.
I don’t see this as a recommendation for LessWrong, although it sure is an interesting historical footnote. Dominic Cummings doesn’t appear to be a credible person on covid-19. For example, in November 2024 he posted a long, conspiratorial tweet which included:
”The Fauci network should be rolled up & retired en masse with some JAILED.
And their media supporters—i.e most of the old media—driven out of business.”
The core problem there is not that he hasn’t read LessWrong enough. (Indeed, reading LessWrong might make a person more likely to believe such things, if anything.)
Incidentally, Cummings also had a scandal in the UK around allegations that he inappropriately violated the covid-19 lockdown and subsequently wasn’t honest about it.
Tens of millions if not hundreds of millions of people in North America had experiences similar to this. The level of alarm spread across the population gradually from around mid-January to mid-March 2020, so at any given time, there were a large number of people who were much more concerned than another large number of people.
I tried to convince my friends to take covid more seriously a few days before the WHO proclamation, the U.S. state of emergency declaration, and all the rest made it evident to them that it was time to worry. I don’t think I’m a genius for this — in fact, they were probably right to wait for more convincing evidence. If we were to re-run the experiment 10 times or 100 times, their approach might prove superior to mine. I don’t know.
This is ridiculous. Do you think these sorts of snipes are at all unique to Eliezer Yudkowsky? Turn on Rachel Maddow or listen to Pod Save America, or follow any number of educated liberals (especially those with relevant expertise or journalists who cover science and medicine) on Twitter, and you would see this kind of stuff all the time. It’s not an insight unique to Yudkowsky that Donald Trump says ridiculous and dangerous things about covid or many other topics.
Thanks for collecting this timeline!
The version of the claim I have heard is not that LW was early to suggest that there might be a pandemic but rather that they were unusually willing to do something about it because they take small-probability, high-impact events seriously. E.g. I suspect that you would say that Wei Dai was “late” because their comment came after the NYT article etc., but nonetheless they made a 700% return betting that covid would be a big deal.
I think it can be hard to remember just how much controversy there was at the time. E.g. you say of March 13, “By now, everyone knows it’s a crisis”, but sadly “everyone” did not include the California Department of Public Health, which didn’t issue stay-at-home orders for another week.
[I have a distinct memory of this because I told my girlfriend I couldn’t see her anymore since she worked at the department of public health (!!) and was still getting a ton of exposure since the California public health department didn’t think covid was that big of a deal.]
Is there any better evidence of this than the example you cited? That comment from Wei Dai is completely ridiculous… Making a lot of money off of a risky options trade does not discredit the efficient market hypothesis.
Even assuming Wei Dai’s math is right (which I don’t automatically trust), the market guessing on February 10, 2020 that there was a 12% chance (or whatever it is) that the covid-19 situation would be as bad as the market thought it was on February 27, 2020 doesn’t seem ridiculous or crazy or a discrediting of the efficient market hypothesis.
(Also note the bias of people only posting about the option trades they do well on, afterward, in retrospect...)
By what date did the majority of LessWrong users start staying at home?
I doubt that there are surveys of when people stayed home. You could maybe try to look at prediction markets but I’m not sure what you would compare them to to see if the prediction market was more accurate than some other reference group.
That seems like the crux of the matter!
I think the COVID case usefully illustrates a broader issue with how “EA/rationalist prediction success” narratives are often deployed.
That said, this is exactly why I’d like to see similar audits applied to other domains where prediction success is often asserted, but rarely with much nuance. In particular: crypto, prediction markets, land value tax (LVT), and more recently GPT-3 / scaling-based AI progress. I wasn’t closely following these discussions at the time, so I’m genuinely uncertain about (i) what was actually claimed ex ante, (ii) how specific those claims were, and (iii) how distinctive they were relative to non-EA communities.
This matters to me for two reasons.
First, many of these claims are invoked rhetorically rather than analytically. “EAs predicted X” is often treated as a unitary credential, when in reality predictive success varies a lot by domain, level of abstraction, and comparison class. Without disaggregation, it’s hard to tell whether we’re looking at genuine epistemic advantage, selective memory, or post-hoc narrative construction.
Second, these track-record arguments are sometimes used—explicitly or implicitly—to bolster the case for concern about AI risks. If the evidential support here rests on past forecasting success, then the strength of that support depends on how well those earlier cases actually hold up under scrutiny. If the success was mostly at the level of identifying broad structural risks (e.g. incentives, tail risks, coordination failures), that’s a very different kind of evidence than being right about timelines, concrete outcomes, or specific mechanisms.
I like this comment. This topic is always at risk of devolving into a generalized debate between rationalists and their opponents, creating a lot of heat but not much light. So it’s helpful to keep a fairly tight focus on potentially action-relevant questions (of which the comment identifies one).
I’ve been around EA pretty deeply since 2015, and to some degree since around 2009. My impression is that overall it’s what you guessed it might be: “selective memory, or post-hoc narrative construction.” Particularly around AI, but also in general with such claims.
(There’s a good reason to make specific, dated predictions publicly, in advance, ideally with some clear resolution criteria.)
Thank you, this is very good. Strong upvoted.
I don’t exactly trust you to do this in an unbiased way, but this comment seems like the state of the art, and I love retrospectives on COVID-19. Plausibly I should look into the extent to which your story checks out, plus how EA itself, the relevant parts of Twitter, or prediction platforms like Metaculus compared at the time (which I felt was definitely ahead).
See, e.g., traviswfisher’s prediction on Jan 24:
https://x.com/metaculus/status/1248966351508692992
Or this post on this very forum from Jan 26:
https://forum.effectivealtruism.org/posts/g2F5BBfhTNESR5PJJ/concerning-the-recent-2019-novel-coronavirus-outbreak
I wrote this comment on Jan 27, indicating that it’s not just a few people worried at the time. I think most “normal” people weren’t tracking covid in January.
I think the thing to realize (and that people easily forget) is that everything was really confusing and there was just a ton of contentious debate during the early months. So while there was apparently a fairly alarmed New York Times report in early February, there were also many other reports in February that were less alarmed, many bad forecasts, etc.
It would be easy to find a few examples like this from any large sample of people. As I mentioned in the quick take, in late January, people were clearing out stores of surgical masks in cities like New York.
Why does this not apply to your original point citing a single NYT article?
It might, but I cited a number of data points to try to give an overall picture. What’s your specific objection/argument?
My overall objection/argument is that you appear to selectively portray data points that show one side, and selectively dismiss data points that show the opposite view. This makes your bottom-line conclusion pretty suspicious.
I also think the rationalist community overreached and their epistemics and speed in early COVID were worse compared to, say, internet people, government officials, and perhaps even the general public in Taiwan. But I don’t think the case for them being slower than Western officials or the general public in either the US or Europe is credible, and your evidence here does not update me much.
Let’s look at the data a bit more thoroughly.
It’s clear that in late January 2020, many people in North America were at least moderately concerned about covid-19.
I already gave the example of some stores in a few cities selling out of face masks. That’s anecdotal, but a sign of enough fear among enough people to be noteworthy.
What about the U.S. government’s reaction? The CDC issued a warning about travelling to China on January 28, and on January 31, the U.S. federal government declared a public health emergency, implemented a mandatory 14-day quarantine for travellers returning from China, and imposed other travel restrictions. Both the CDC warning and the travel restrictions were covered in the press, so many people knew about them, but even before that happened, a lot of people said they were worried.
Here’s a Morning Consult poll from January 24-26, 2020:
An Ipsos poll of Canadians from January 27-28 found similar results:
Were significantly more than 37% of LessWrong users very concerned about covid-19 around this time? Did significantly more than 16% think covid-19 posed a threat to themselves and their family?
It’s hard to make direct, apples-to-apples comparisons between the general public and the LessWrong community. We don’t have polls of the LessWrong community to compare to. But those examples you gave from January 24-January 27, 2020 don’t seem different from what we’d expect if the LessWrong community was at about the same level of concern at about the same time as the general public. Even if the examples you gave represented the worries of ~15-40% of the LessWrong community, that wouldn’t be evidence that LessWrong users were doing better than average.
I’m not claiming that the LessWrong community was clearly significantly behind. If it was behind at all, it was only by a few days or maybe a week tops (not much in the grand scheme of things), and the evidence isn’t clear or rigorous enough to definitively draw a conclusion like that. My claim is just that the LessWrong community’s claim to have called the pandemic early is pretty clearly false or, at least, so far completely unsupported.
Thanks, I find the polls to be much stronger evidence than the other things you’ve said.
I recommend looking at the Morning Consult PDF and checking the different variations of the question to get a fuller picture. People also gave surprisingly high answers for other viruses like Ebola and Zika, but not nearly as high as for covid.
If you want a source who is biased in the opposite direction and who generally agrees with my conclusion, take a look here and here. I like this bon mot:
This is their conclusion from the second link:
This is a cool write-up! I’m curious how much (if at all) you took Zvi’s COVID round-ups into account? I wasn’t around LessWrong during COVID, but, if I understand correctly, those played a large role in the information flow during that time.
I haven’t looked into it, but any and all new information that can give a fuller picture is welcome.
Yeah! This is the series that I am referring to: https://www.lesswrong.com/s/rencyawwfr4rfwt5C.
As I understand it, Zvi was quite ahead of the curve with COVID and moved out of New York before others. I could be wrong, though.
The first post listed there is from March 2, 2020, so that’s relatively late in the timeline we’re considering, no? That’s 3 days later than the February 28 post I discussed above as the first/best candidate for a truly urgent early warning about covid-19 on LessWrong. (2020 was a leap year, so there was a February 29.)
That first post from March 2 also seems fairly simple and not particularly different from the February 28 post (which it cites).
Following up a bit on this, @parconley. The second post in Zvi’s covid-19 series is from 6pm Eastern on March 13, 2020. Let’s remember where this is in the timeline. From my quick take above:
Zvi’s post from March 13, 2020 at 6pm is about all the school closures that happened that day. (The U.S. state of emergency was declared that morning.) It doesn’t make any specific claims or predictions about the spread of the novel coronavirus, or anything else that could be assessed in terms of its prescience. It mostly focuses on the topic of the social functions that schools play (particularly in the United States and in the state of New York specifically) other than teaching children, such as providing free meals and supervision.
This is too late into the timeline to count as calling the pandemic early, and the post doesn’t make any predictions anyway.
The third post from Zvi is on March 17, 2020, and it’s mostly a personal blog post. There are a few relevant bits. For one, Zvi admits he was surprised at how bad the pandemic was at that point:
He argues New York City is not locking down soon enough and San Francisco is not locking down completely enough. About San Francisco, one thing he says is:
I don’t know how sound this was given what experts knew at the time. It might have been the right call. I don’t know. I will just say that, in retrospect, going outside seems to be one of the things we originally thought wasn’t fine but later decided was actually fine after all.
The next post after that isn’t until April 1, 2020. It’s about the viral load of covid-19 infections and the question of how much viral load matters. By this point, we’re getting into questions about the unfolding of the ongoing pandemic, rather than questions about predicting the pandemic in advance. You could potentially go and assess that prediction track record separately, but that’s beyond the scope of my quick take, which was to assess whether LessWrong called covid early.
Overall, Zvi’s posts, at least the ones included in this series, are not evidence for Zvi or LessWrong calling covid early. The posts start too late and don’t make any predictions. Zvi saying “I didn’t expect it to be this bad” is actually evidence against Zvi calling covid early. So, I think we can close the book on this one.
Still open to hearing other things people might think of as evidence that the LessWrong community called covid early.
I spun this quick take out as a full post here. When I submitted the full post, there was no/almost no engagement on this quick take. In the future, I’ll try to make sure to publish things only as a quick take or only as a full post, but not both. This was a fluke under unusual circumstances.
Feel free to continue commenting here, cross-post comments from here onto the full post, make new comments on the post, or do whatever you want. Thanks to everyone who engaged and left interesting comments.
The NPR podcast Planet Money just released an episode on GiveWell.
The Ezra Klein Show (one of my favourite podcasts) just released an episode with GiveWell CEO Elie Hassenfeld!
Since my days of reading William Easterly’s Aid Watch blog back in the late 2000s and early 2010s, I’ve always thought it was a matter of both justice and efficacy to have people from globally poor countries in leadership positions at organizations working on global poverty. All else being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from Canada with an equal level of education, an equal ability to network with the right international organizations, etc.
In practice, this is probably hard to do, since it requires crossing language barriers, cultural barriers, geographical distance, and international borders. But I think it’s worth it.
So much of what effective altruism does around global poverty, including the most evidence-based and quantitative work, relies on people’s intuitions. And intuitions formed from living in wealthy, Western countries, with no connection to or experience of a globally poor country, are going to be less accurate than the intuitions of people who have lived in poor countries and know a lot about them.
Simply put, first-hand experience of poor countries is a form of expertise and organizations run by people with that expertise are probably going to be a lot more competent at helping globally poor people than ones that aren’t.
I agree with most of what you say here; indeed, all things being equal, a person from Kenya is going to be far more effective at doing anti-poverty work in Kenya than someone from anywhere else. The problem is your caveats: things are almost never equal...
1) Education systems just aren’t nearly as good in lower-income countries. This means that education is sadly barely ever equal, even between low-income countries (a Kenyan once joked with me that “a Ugandan degree holder is like a Kenyan high school leaver”). If you look at the top echelon of NGO/charity leaders from low-income countries whose charities have grown and scaled big, most have been at least partially educated in richer countries.
2) Ability to network is sadly usually so, so much higher if you’re from a higher-income country. Social capital is real and insanely important. If you look at the very biggest NGOs, most of them were founded not just by Westerners, but by IVY LEAGUE OR OXBRIDGE EDUCATED WESTERNERS: Paul Farmer (Partners in Health) from Harvard, Raj Panjabi (Last Mile Health) from Harvard, Paul Niehaus (GiveDirectly) from Harvard, Rob Mathers (the Against Malaria Foundation) from Harvard AND Cambridge. With those connections you can turn a good idea into growth so much faster, even compared to super privileged people like me from New Zealand, let alone people with amazing ideas and organisations in low-income countries who just don’t have access to that kind of social capital.
3) The pressures on people from low-income countries to secure their futures are so high that their own financial security will often come first, and the vast majority won’t stay the course with their charity but will leave when they get an opportunity to further their career. And fair enough too! I’ve seen a number of incredibly talented founders here in Northern Uganda drop their charity for a high-paying USAID job (that ended poorly...), or an overseas study scholarship, or a solid government job. Here’s a telling quote from this great take by @WillieG:
“Roughly a decade ago, I spent a year in a developing country working on a project to promote human rights. We had a rotating team of about a dozen (mostly) brilliant local employees, all college-educated, working alongside us. We invested a lot of time and money into training these employees, with the expectation that they (as members of the college-educated elite) would help lead human rights reform in the country long after our project disbanded. I got nostalgic and looked up my old colleagues recently. Every single one is living in the West now. A few are still somewhat involved in human rights, but most are notably under-employed (a lawyer washing dishes in a restaurant in Virginia, for example).”
https://forum.effectivealtruism.org/posts/tKNqpoDfbxRdBQcEg/?commentId=trWaZYHRzkzpY9rjx
I think (somewhat sadly) a good combination can be for co-founders or co-leaders to be one person from a high-income country with more funding/research connections, and one local person who, like you say, will be far more effective at understanding the context and leading in locally appropriate ways. This synergy can cover important bases, and you’ll see a huge number of charities (including mine) founded along these lines.
These realities make me uncomfortable, though, and I wish it weren’t so. As @Jeff Kaufman 🔸 said, “I can’t reject my privilege, I can’t give it back”, so I try and use my privilege as best as possible to help lift up the poorest people. The organisation I co-founded, OneDay Health, has me as the only employed foreigner, alongside 65 local staff.
There are two philosophies on what the key to life is.
The first philosophy is that the key to life is to separate yourself from the wretched masses of humanity by finding a special group of people that is above it all and becoming part of that group.
The second philosophy is that the key to life is to see the universal in your individual experience. And this means you are always stretching yourself to include more people, find connection with more people, show compassion and empathy to more people. But this is constantly uncomfortable because, again and again, you have to face the wretched masses of humanity and say “me too, me too, me too” (and realize you are one of them).
I am a total believer in the second philosophy and a hater of the first philosophy. (Not because it’s easy, but because it’s right!) To the extent I care about effective altruism, it’s because of the second philosophy: expand the moral circle, value all lives equally, extend beyond national borders, consider non-human creatures.
When I see people in effective altruism evince the first philosophy, to me, this is a profane betrayal of the whole point of the movement.
One of the reasons (among several other important reasons) that rationalists piss me off so much is that their whole worldview and subculture is based on the first philosophy. Even the word “rationalist” is about being superior to other people. If the rationalist community has one founder or leader, it’s Eliezer Yudkowsky. The way Eliezer Yudkowsky talks to and about other people, even people who are actively trying to help him or to understand him, is so hateful and so mean. He exhales contempt. And it isn’t just Eliezer — you can go on LessWrong and read horrifying accounts of how some prominent people in the community have treated their employees or romantic partners, with the stated justification that they are separate from and superior to others. Obviously there’s a huge problem with racism, sexism, and anti-LGBT prejudice too, which are other ways of feeling separate and above.
There is no happiness to be found at the top of a hierarchy. Look at the people who think in the most hierarchical terms, who have climbed to the tops of the hierarchies they value. Are they happy? No. They’re miserable. This is a game you can’t win. It’s a con. It’s a lie.
In the beautiful words of the Franciscan friar Richard Rohr, “The great and merciful surprise is that we come to God not by doing it right but by doing it wrong!”
(Richard Rohr’s episode of You Made It Weird with Pete Holmes is wonderful if you want to hear more.)
A number of podcasts are doing a fundraiser for GiveDirectly: https://www.givedirectly.org/happinesslab2025/
Podcast about the fundraiser: https://pca.st/bbz3num9
I just want to point out that I have a degree in philosophy and have never heard the word “epistemics” used in the context of academic philosophy. The word used has always been either “epistemology” or “epistemic” as an adjective in front of a noun (never on its own as a noun, and certainly never pluralized).
From what I can tell, “epistemics” seems to be weird EA Forum/LessWrong jargon. Not sure how or why this came about, since this is not obscure philosophy knowledge, nor is it hard to look up.
If you Google “epistemics” philosophy, you get 1) sources like Wikipedia that talk about epistemology, not “epistemics”, 2) a post from the EA Forum and a page from the Forethought Foundation, which is an effective altruist organization, 3) some unrelated, miscellaneous stuff (i.e. neither EA-related nor academic philosophy-related), and 4) a few genuine but fairly obscure uses of the word “epistemics” in an academic philosophy context. This confirms that the term is rarely used in academic philosophy.
I also don’t know what people in EA mean when they say “epistemics”. I think they probably mean something like epistemic practices, but I actually don’t know for sure.
I would discourage the use of the term “epistemics”, particularly as its meaning is unclear, and would advocate for a replacement such as epistemology or epistemic practices (or whatever you like, but not “epistemics”).
I agree this is just a unique rationalist use. Same with ‘agentic’ though that has possibly crossed over into the more mainstream, at least in tech-y discourse.
However I think this is often fine, especially because ‘epistemics’ sounds better than ‘epistemic practices’ and means something distinct from ‘epistemology’ (the study of knowledge).
Always good to be aware you are using jargon though!
There’s no accounting for taste, but ‘epistemics’ sounds worse to my ear than ‘epistemic practices’ because the clunky jargoniness of ‘epistemics’ is just so evident. It’s as if people said ‘democratics’ instead of ‘democracy’, or ‘biologics’ instead of ‘biology’.
I also don’t know for sure what ‘epistemics’ means. I’m just inferring that from its use and assuming it means ‘epistemic practices’, or something close to that.
‘Epistemology’ is unfortunately a bit ambiguous and primarily connotes the subfield of philosophy rather than anything you do in practice, but I think it would also be an acceptable and standard use to talk about ‘epistemology’ as what one does in practice, e.g., ‘scientific epistemology’ or ‘EA epistemology’. It’s a bit similar to ‘ethics’ in this regard, which is both an abstract field of study and something one does in practice, although the default interpretation of ‘epistemology’ is the field, not the practice, and for ‘ethics’ it’s the reverse.
It’s neither here nor there, but I think talking about personal ‘agency’ (terminology that goes back decades, long predating the rationalist community) is far more elegant than talking about a person being ‘agentic’. (For AI agents, it doesn’t matter.)
I find “epistemics” neat because it is shorter than “applied epistemology” and reminds me of “athletics”, with the resulting (implied) emphasis on practice. I don’t think anyone ever explained what “epistemics” refers to, and I thought it was pretty self-explanatory from the similarity to “athletics”.
I also disagree with the general notion that jargon specific to a community is necessarily bad, especially if that jargon has fewer syllables. Most subcultures, engineering disciplines, and sciences invent words or abbreviations for more efficient communication, and while some of that may be due to trying to gatekeep, it’s so universal that I’d be surprised if it doesn’t carry value. There can be better and worse coinages of new terms, and three/four/five-letter abbreviations such as “TAI” or “PASTA” or “FLOP” or “ASARA” are worse than words like “epistemics” or “agentic”.
I guess ethics makes the distinction between normative ethics and applied ethics. My understanding is that epistemology is not about practical techniques, and that one can make a distinction here (just like the distinction between “methodology” and “methods”).
I tried to figure out if there’s a pair of suffixes that try to express the difference between the theoretic study of some field and the applied version, Claude suggests “-ology”/”-urgy” (as in metallurgy, dramaturgy) and “-ology”/”-iatry” (as in psychology/psychiatry), but notes no general such pattern exists.
Applied ethics is still ethical theory, it’s just that applied ethics is about specific ethical topics, e.g. vegetarianism, whereas normative ethics is about systems of ethics, e.g. utilitarianism. If you wanted to distinguish theory from practice and be absolutely clear, you’d have to say something like ethical practices.
I prefer to say epistemic practices rather than epistemics (which I dislike) or epistemology (which I like, but is more ambiguous).
I don’t think the analogy between epistemics and athletics is obvious, and I would be surprised if even 1% of the people who have ever used the term epistemics have made that connection before.
I am very wary of terms that are never defined or explained. It is easy for people to assume they know what they mean, that there’s a shared meaning everyone agrees on. I really don’t know what epistemics means and I’m only assuming it means epistemic practices.
I fear that there’s a realistic chance if I started to ask different people to define epistemics, we would quickly uncover that different people have different and incompatible definitions. For example, some people might think of it as epistemic practices and some people might think of it as epistemological theory.
I am more anti-jargon and anti-acronyms than a lot of people. Really common acronyms, like AI or LGBT, or acronyms where the acronym is far better known than the spelled-out version, like NASA or DVD, are, of course, absolutely fine. PASTA and ASARA are egregious.
I’m such an anti-acronym fanatic I even spell out artificial general intelligence (AGI) and large language model (LLM) whenever I use them for the first time in a post.
My biggest problem with jargon is that nobody knows what it means. The in-group who is supposed to know what it means also doesn’t know what it means. They think they do, but they’re just fooling themselves. Ask them probing questions, and they’ll start to disagree and fight about the definition. This isn’t always true, but it’s true often enough to make me suspicious of jargon.
Jargon can be useful, but it should be defined, and you should give examples of it. If a common word or phrase exists that is equally good or better, then you should use that instead. For example, James Herbert recently made the brilliant comment that instead of “truthseeking” — an inscrutable term that, for all I know, would turn out to have no definite meaning if I took the effort to try to get multiple people to try to define it — an older term used on effectivealtruism.org was “a scientific mindset”, which is nearly self-explanatory. Science is a well-known and well-defined concept. Truthseeking — whatever that means — is not.
This isn’t just true for a subculture like the effective altruist community, it’s also true for a field like academic philosophy (maybe philosophy is unique in this regard among academic fields). You wouldn’t believe the number of times people disagree about the basic meaning of terms. (For example, do sentience and consciousness mean the same thing, or two different things? What about autonomy and freedom?) This has made me so suspicious that shared jargon actually isn’t understood in the same way by the people who are using it.
Just avoiding jargon isn’t the whole trick (for one, it’s often impossible or undesirable), it’s got to be a multi-pronged approach.
You’ve really got to give examples of things. Examples are probably more important than definitions. Think about when you’re trying to learn a card game, a board game, or a parlour game (like charades). The instructions can be very precise and accurate, but reading the instructions out loud often makes half the table go googly-eyed and start shaking their heads. If the instructions contain even one example, or if you can watch one round of play, that’s so much more useful than a precise “definition” of the game. Examples, examples, examples.
Also, just say it simpler. Speak plainly. Instead of ASARA, why not AI doing AI? Instead of PASTA, why not AI scientists and engineers? It’s so much cleaner, and simpler, and to the point.
People in effective altruism or adjacent to it should make some public predictions or forecasts about whether AI is in a bubble.
Since the timeline of any bubble is extremely hard to predict and isn’t the core issue, the time horizon for the bubble prediction could be quite long, say, 5 years. The point would not be to worry about the exact timeline but to get at the question of whether there is a bubble that will pop (say, before January 1, 2031).
For those who know more about forecasting than me, and especially for those who can think of good ways to financially operationalize such a prediction, I would encourage you to make a post about this.
[Edited on Nov. 17, 2025 at 3:35 PM Eastern to add: I wrote a full-fledged post about the AI bubble that can prompt a richer discussion. It doesn’t attempt to operationalize the bubble question, but gets into the expert opinions and evidence. I also do my own analysis.]
For now, an informal poll:
My leading view is that there will be some sort of bubble pop, but with people still using generative AI tools to some degree afterwards (like how people kept using the internet after the dot-com bust).
Still major uncertainty on my part because I don’t know much about financial markets, and am still highly uncertain about the level where AI progress fully stalls.
I just realized the way this poll is set up is really confusing. You’re currently at “50% 100% probability”, which, when you look at it on the number line, looks like 75%. Not the best tool to use for such a poll, I guess!
Oh, sure. People will keep using LLMs.
I don’t know exactly how you’d operationalize an AI bubble. If OpenAI were a public company, you could say its stock price goes down a certain amount. But private companies can control their own valuation (or the public perception of it) to a certain extent, e.g. by not raising more money so their last known valuation is still from their most recent funding round.
Many public companies like Microsoft, Google, and Nvidia are involved in the AI investment boom, so their stocks can be taken into consideration. You can also look at the level of investment and data centre construction.
I don’t think it would be that hard to come up with reasonable resolution criteria, it’s just that this is of course always a nitpicky thing with forecasting and I haven’t spent any time on it yet.
I’m not exactly sure about the operationalization of this question, but it seems like there’s a bubble among small AI startups at the very least. The big players might be unaffected however? My evidence for this is some mix of not seeing a revenue pathway for a lot of these companies that wouldn’t require a major pivot, few barriers to entry for larger players if their product becomes successful, and having met a few people who work in AI startups who claim to be optimistic about earnings and stuff but can’t really back that up.
I don’t know much about small AI startups. The bigger AI companies have a problem because their valuations have increased so much and the level of investment they’re making (e.g. into building datacentres) is reaching levels that feel unsustainable.
It’s to the point where the AI investment, driven primarily by the large AI companies, has significant macroeconomic effects on the United States economy. The popping of an AI bubble could be followed by a U.S. recession.
However, it’s a bit complicated, in that case, as to whether to say the popping of the bubble would have “caused” the recession, since there are a lot of factors, such as tariffs. Macroeconomics and financial markets are complicated and I know very little. I’m not nearly an expert.
I don’t think small AI startups creating successful products and then large AI companies copying them and outcompeting them would count as a bubble. That sounds like the total amount of revenue in the industry would be about the same as if the startups succeeded; it would just flow to the bigger companies instead.
The bubble question is about the industry as a whole.
I do think there’s also a significant chance of a larger bubble, to be fair, affecting the big AI companies. But my instinct is that a sudden fall in investment into small startups and many of them going bankrupt would get called a bubble in the media, and that that investment wouldn’t necessarily just go into the big companies.
I put 30% on this possibility, maybe 35%. I don’t have much more to say than “time horizons!”, “look how useful they’re becoming in my dayjob & personal life!”, “look at the qualitative improvement over the last six years”, “we only need to automate machine learning research, which isn’t the hardest thing to automate”.
Worlds in which we get a bubble pop are worlds in which we don’t get a software intelligence explosion, and in which either useful products come too late for the investment to sustain itself or there aren’t really many useful products beyond what we already have. (This is tied in with “are we getting TAI through the things LLMs make us able to do, or are able to do themselves, without fundamental insights”.)
I haven’t done the sums myself, but do we know for sure that they can’t make money without being all that useful, so long as a lot of people interact with them everyday?
Is Facebook “useful”? Not THAT much. Do people pay for it? No, it’s free. Instagram is even less useful than Facebook, which at least used to actually be good for organizing parties and pub nights. Does META make money? Yes. Does the equally useless TikTok make money? I presume so, yes. I think tech companies are pretty expert at monetizing things that have no user fee and aren’t that helpful at work. There’s already a massive user base for ChatGPT etc. Maybe they can monetize it even without it being THAT useful. Or maybe the sums just don’t work out for that, I’m not sure. But clearly the market thinks they will make money in expectation. That’s a boring reason for rejecting “it’s a bubble” claims, and bubbles do happen, but I suspect beating the market in pricing shares genuinely is quite difficult.
Of course, there could also be a bubble even if SOME AI companies make a lot of money. That’s what happened with the dot-com bubble.
This is an important point to consider. OpenAI is indeed exploring how to put ads on ChatGPT.
My main source of skepticism about this is that the marginal revenue from an online ad is extremely low, but that’s fine because the marginal cost of serving a webpage or loading a photo in an app or whatever is also extremely low. I don’t have a good sense of the actual numbers here, but since a GPT-5 query is considerably more expensive than serving a webpage, this could be a problem. (Also, that’s just the marginal cost. OpenAI, like other companies, also has to amortize all its fixed costs over all its sales, whether they’re ad sales or sales directly to consumers.)
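To illustrate the shape of the concern, and only the shape, here is a back-of-envelope sketch in which every number is a made-up placeholder, since I don’t know real ad rates or per-query inference costs:

```python
# All figures are hypothetical placeholders, not real ad rates or inference costs.
ad_revenue_per_impression = 0.005   # assume half a cent of ad revenue per view
cost_to_serve_webpage = 0.0001      # assume a hundredth of a cent to serve a page
cost_per_llm_query = 0.01           # assume a cent of compute per LLM query

webpage_margin = ad_revenue_per_impression - cost_to_serve_webpage
llm_query_margin = ad_revenue_per_impression - cost_per_llm_query

print(f"Margin per ad-supported webpage view: ${webpage_margin:.4f}")   # small profit
print(f"Margin per ad-supported LLM query:    ${llm_query_margin:.4f}") # a loss
# Under these made-up numbers, the webpage earns a little on each view while the
# LLM query loses money, before even counting fixed costs like training runs.
```

Whether the real numbers look anything like this is exactly the kind of thing I don’t have a good sense of.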
It’s been rumoured/reported (not sure which) that OpenAI is planning to get ChatGPT to sell things to you directly. So, if you ask, “Hey, ChatGPT, what is the healthiest type of soda?”, it will respond, “Why, a nice refreshing Coca‑Cola® Zero Sugar of course!” This seems horrible. That would probably drive some people off the platform, but, who knows, it might be a net financial gain.
There are other “useless” ways companies like OpenAI could try to drive usage and try to monetize either via ads or paid subscriptions. Maybe if OpenAI leaned heavily into the whole AI “boyfriends/girlfriends” thing that would somehow pay off — I’m skeptical, but we’ve got to consider all the possibilities here.
What do you make of the fact that METR’s time horizon graph and METR’s study on AI coding assistants point in opposite directions? The graph says: exponential progress! Superhuman coders! AGI soon! Singularity! The study says: overhyped product category, useless tool, tricks people into thinking it helps them when it actually hurts them.
Pretty interesting, no?
Yep, I wouldn’t have predicted that. I guess the standard retort is: Worst case! Existing large codebase! Experienced developers!
I know that there’s software tools I use >once a week that wouldn’t have existed without AI models. They’re not very complicated, but they’d’ve been annoying to code up myself, and I wouldn’t have done it. I wonder if there’s a slowdown in less harsh scenarios, but it’s probably not worth the value of information of running such a study.
I dunno. I’ve done a bunch of calibration practice[1], this feels like a 30%, I’m calling 30%. My probability went up recently, mostly because some subjectively judged capabilities that I was expecting didn’t start showing up.
My metaculus calibration around 30% isn’t great, I’m overconfident there, I’m trying to keep that in mind. My fatebook is slightly overconfident in that range, and who can tell with Manifold.
There’s a longer discussion of that oft-discussed METR time horizons graph that warrants a post of its own.
My problem with how people interpret the graph is that people slip quickly and wordlessly from step to step in a logical chain of inferences that I don’t think can be justified. The chain of inferences is something like:
AI model performance on a set of very limited benchmark tasks → AI model performance on software engineering in general → AI model performance on everything humans do
I don’t think these inferences are justifiable.
I haven’t thought about my exact probability too hard yet, but for now I’ll just say 90% because that feels about right.
Your help requested:
I’m seeking second opinions on whether my contention in Edit #4 at the bottom of this post is correct or incorrect. See the edit at the bottom of the post for full details.
Brief info:
My contention is about the Forecasting Research Institute’s recent LEAP survey.
One of the headline results from the survey is about the probabilities the respondents assign to each of three scenarios.
However, the question uses an indirect framing — an intersubjective resolution or metaprediction framing.
The specific phrasing of the question is quite important.
My contention is, if respondents took the question literally, as written, they did not actually report their probabilities for each scenario, and there is no way to derive their probabilities from what they did report.
Therefore, the headline result that states the respondents’ probabilities for the three scenarios is not actually true.
If my contention is right, then it means the results of the report are being misreported in a quite significant way. If my contention is wrong, then I must make a mea culpa and apologize to the Forecasting Research Institute for my error.
So, your help requested. Am I right or wrong?
(Note: the post discusses multiple topics, but here I’m specifically asking for opinions on the intersubjective resolution/metaprediction concern raised in Edit #4.)
I believe you are correct, and will probably write up a post explaining why in detail at some point.
Thank you for your time and attention! I appreciate it!
Self-driving cars are not close to getting solved. Don’t take my word for it. Listen to Andrej Karpathy, the lead AI researcher responsible for the development of Tesla’s Full Self-Driving software from 2017 to 2022. (Karpathy also did two stints as a researcher at OpenAI, taught a deep learning course at Stanford, and coined the term “vibe coding”.)
From Karpathy’s October 17, 2025 interview with Dwarkesh Patel:
Karpathy elaborated later in the interview:
I hope the implication for discussions around AGI timelines is clear.
[Personal blog] I’m taking a long-term, indefinite hiatus from the EA Forum.
I’ve written enough in posts, quick takes, and comments over the last two months to explain the deep frustrations I have with the effective altruist movement/community as it exists today. (For one, I think the AGI discourse is completely broken and far off-base. For another, I think people fail to be kind to others in ordinary, important ways.)
But the strongest reason for me to step away is that participating in the EA Forum is just too unpleasant. I’ve had fun writing stuff on the EA Forum. I thank the people who have been warm to me, who have had good humour, and who have said interesting, constructive things.
But negativity bias being what it is (and maybe “bias” is too biased a word for it; maybe we should call it “negativity preference”), the few people who have been really nasty to me have ruined the whole experience. I find myself trying to remember names, to remember who’s who, so I can avoid clicking on reply notifications from the people who have been nasty. And this is a sign it’s time to stop.
Psychological safety is such a vital part of online discussion, or any discussion. Open, public forums can be a wonderful thing, but psychological safety is hard to provide on an open, public forum. I still have some faith in open, public forums, but I tend to think the best safety tool is giving authors the ability to determine who is and isn’t allowed to interact with their posts. There is some risk of people censoring disagreement, sure. But nastiness online is a major threat to everything good. It causes people to self-censor (e.g. by quitting the discussion platform or by withholding opinions) and it has terrible effects on discourse and on people’s minds.
And private discussions are important too. One of the most precious things you can find in this life is someone you can have good conversations with who will maintain psychological safety, keep your confidences, “yes, and” you, and be constructive. Those are the kind of conversations that loving relationships are built on. If you end up cooking something that the world needs to know about, you can turn it into a blog post or a paper or a podcast or a forum post. (I’ve done it before!) But you don’t have to do the whole process leading up to that end product in public.
The EA Forum is unusually good in some important respects, which is kind of sad, because it shows us a glimpse of what maybe could exist on the Internet, without itself realizing that promise.
If anyone wants to contact me for some reason, you can send me a message via the forum and I should get it as an email. Please put your email address in the message so I can respond to you by email without logging back into the forum.
Take care, everyone.
Have Will MacAskill, Nick Beckstead, or Holden Karnofsky responded to the reporting by Time that they were warned about Sam Bankman-Fried’s behaviour years before the FTX collapse?
Will responded here.
What AI model does SummaryBot use? And does whoever runs SummaryBot use any special tricks on top of that model? It could just be bias, but SummaryBot seems better at summarizing stuff than GPT-5 Thinking, o3, or Gemini 2.5 Pro, so I’m wondering if it’s a different model or maybe just good prompting or something else.
@Toby Tremlett🔹, are you SummaryBot’s keeper? Or did you just manage its evil twin?
Hey! @Dane Valerie runs SummaryBot, maybe she’d like to comment.
Thanks, Toby!
It used to run on Claude, but I’ve since moved it to a ChatGPT project using GPT-5. I update the system instructions quarterly based on feedback, which probably explains the difference you’re seeing. You can read more in this doc on posting SummaryBot comments.
Thank you very much for the info! It’s probably down to your prompting, then. Squeezing things into 6 bullet points might be just a helpful format for ChatGPT or for summaries (even human-written ones) in general. Maybe I will try that myself when I want to ask ChatGPT to summarize something.
I also think there’s an element of “magic”/illusion to it, though, since I just noticed a couple mistakes SummaryBot made and now its powers seem less mysterious.
Here is the situation we’re in with regard to near-term prospects for artificial general intelligence (AGI). This is why I’m extremely skeptical of predictions that we’ll see AGI within 5 years.
-Current large language models (LLMs) have extremely limited capabilities. For example, they can’t score above 5% on the ARC-AGI-2 benchmark, they can’t automate any significant amount of human labour,[1] and they can only augment human productivity in minor ways in limited contexts.[2] They make ridiculous mistakes all the time, like saying something that happened in 2025 caused something that happened in 2024, while listing the dates of the events. They struggle with things that are easy for humans, like playing hangman.
-The capabilities of LLMs have been improving slowly. There is only a modest overall difference between GPT-3.5 (the original ChatGPT model), which came out in November 2022, and newer models like GPT-4o, o4-mini, and Gemini 2.5 Pro.
-There are signs that there are diminishing returns to scaling for LLMs. Increasing the size of models and the size of the pre-training data doesn’t seem to be producing the desired results anymore. LLM companies have turned to scaling test-time compute to eke out more performance gains, but how far can that go?
-There may be certain limits to scaling that are hard or impossible to overcome. For example once you’ve trained a model on all the text that exists in the world, you can’t keep training on exponentially[3] more text every year. Current LLMs might be fairly close to running out of exponentially[4] more text to train on, if they haven’t run out already.[5]
-A survey of 475 AI experts found that 76% think it’s “unlikely” or “very unlikely” that “scaling up current AI approaches” will lead to AGI. So, we should be skeptical of the idea that just scaling up LLMs will lead to AGI, even if LLM companies manage to keep scaling them up and improving their performance by doing so.
-Few people have any concrete plan for how to build AGI (beyond just scaling up LLMs). The few people who do have a concrete plan disagree fundamentally on what the plan should be. All of these plans are in the early-stage research phase. (I listed some examples in a comment here.)
-Some of the scenarios people are imagining where we get to AGI in the near future involve a strange, exotic, hypothetical process wherein a sub-AGI AI system can automate the R&D that gets us from a sub-AGI AI system to AGI. This requires two things to be true: 1) that doing the R&D needed to create AGI is not a task that would require AGI or human-level AI and 2) that, in the near term, AI systems somehow advance to the point where they’re able to do meaningful R&D autonomously. Given that I can’t even coax o4-mini or Gemini 2.5 Pro into playing hangman properly, and given the slow improvement of LLMs and the signs of diminishing returns to scaling I mentioned, I don’t see how (2) could be true. The arguments for (1) feel very speculative and handwavy.
Given all this, I genuinely can’t understand why some people think there’s a high chance of AGI within 5 years. I guess the answer is they probably disagree on most or all of these individual points.
Maybe they think the conventional written question-and-answer benchmarks for LLMs are fair apples-to-apples comparisons of machine intelligence and human intelligence. Maybe they are really impressed with the last 2 to 2.5 years of progress in LLMs. Maybe they are confident no limits to scaling or diminishing returns to scaling will stop progress anytime soon. Maybe they are confident that scaling up LLMs is a path to AGI. Or maybe they think LLMs will soon be able to take over the jobs of researchers at OpenAI, Anthropic, and Google DeepMind.
I have a hunch (just a hunch) that it’s not a coincidence many people’s predictions are converging (or herding) around 2030, give or take a few years, and that 2029 has been the prophesied year for AGI since Ray Kurzweil’s book The Age of Spiritual Machines in 1999. It could be a coincidence. But I have a sense that there has been a lot of pent-up energy around AGI for a long time and ChatGPT was like a match in a powder keg. I don’t get the sense that people formed their opinions about AGI timelines in 2023 and 2024 from a blank slate.
I think many people have been primed for years by people like Ray Kurzweil and Eliezer Yudkowsky and by the transhumanist and rationalist subcultures to look for any evidence that AGI is coming soon and to treat that evidence as confirmation of their pre-existing beliefs. You don’t have to be directly influenced by these people or by these subcultures to be influenced. If enough people are influenced by them or a few prominent people are influenced, then you end up getting influenced all the same. And when it comes to making predictions, people seem to have a bias toward herding, i.e., making their predictions more similar to the predictions they’ve heard, even if that ends up making their predictions less accurate.
The process by which people come up with the year they think AGI will happen seems especially susceptible to herding bias. You ask yourself when you think AGI will happen. A number pops into your head that feels right. How does this happen? Who knows.
If you try to build a model to predict when AGI will happen, you still can’t get around it. Some of your key inputs to the model will require you to ask yourself a question and wait a moment for a number to pop into your head that feels right. The process by which this happens will still be mysterious. So, the model is ultimately no better than pure intuition because it is pure intuition.
I understand that, in principle, it’s possible to make more rigorous predictions about the future than this. But I don’t think that applies to predicting the development of a hypothetical technology where there is no expert agreement on the fundamental science underlying that technology, and not much in the way of fundamental science in that area at all. That seems beyond the realm of ordinary forecasting.
This post discusses LLMs and labour automation in the section “Real-World Adoption”.
One study I found had mixed results. It looked at the use of LLMs to aid people working in customer support, which seems like it should be one of the easiest kinds of jobs to automate using LLMs. The study found that the LLMs increased productivity for new, inexperienced employees but decreased productivity for experienced employees who already knew the ins and outs of the job:
I’m using “exponentially” colloquially to mean every year the LLM’s training dataset grows by 2x or 5x or 10x — something along those lines. Technically, if the training dataset increased by 1% a year, that would be exponential, but let’s not get bogged down in unimportant technicalities.
Yup, still using it colloquially.
Epoch AI published a paper in June 2024 that predicts LLMs will exhaust the Internet’s supply of publicly available human-written text between 2026 and 2032.
Slight update to the odds I’ve been giving to the creation of artificial general intelligence (AGI) before the end of 2032. I’ve been anchoring the numerical odds of this to the odds of a third-party candidate like Jill Stein or Gary Johnson winning a U.S. presidential election. That’s something I think is significantly more probable than AGI by the end of 2032. Previously, I’d been using 0.1% or 1 in 1,000 as the odds for this, but I was aware that these odds were probably rounded.
I took a bit of time to refine this. I found that in 2016, FiveThirtyEight put the odds on Evan McMullin — who was running as an independent, not for a third party, but close enough — becoming president at 1 in 5,000 or 0.02%. Even these odds are quasi-arbitrary, since McMullin only became president in simulations where neither of the two major party candidates won a majority of Electoral College votes. In such scenarios, Nate Silver arbitrarily put the odds at 10% that the House would vote to appoint McMullin as the president.
So, for now, it is more accurate for me to say: the probability of the creation of AGI before the end of 2032 is significantly less than 1 in 5,000 or 0.02%.
I can also expand the window of time from the end of 2032 to the end of 2034. That’s a small enough expansion it doesn’t affect the probability much. Extending the window to the end of 2034 covers the latest dates that have appeared on Metaculus since the big dip in its timeline that happened in the month following the launch of GPT-4. By the end of 2034, I still put the odds of AGI significantly below 1 in 5,000 or 0.02%.
My confidence interval is over 95%. [Edited Nov. 28, 2025 at 3:06pm Eastern. See comments below.]
I will continue to try to find other events to anchor my probability to. It’s difficult to find good examples. An imperfect point of comparison is an individual’s annual risk of being struck by lightning, which is 1 in 1.22 million. Over 9 years, the risk is about 1 in 135,000. Since the creation of AGI within 9 years seems less likely to me than being struck by lightning, I could also say the odds of AGI’s creation within that timeframe are less than 1 in 135,000, or less than 0.0007%.
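For anyone who wants to check the lightning arithmetic, here is a quick sketch, assuming the 1 in 1.22 million figure is an annual risk and that years are independent:

```python
# Quick check of the "struck by lightning" comparison.
annual_risk = 1 / 1_220_000  # assumed annual individual risk of being struck

# Probability of being struck at least once over 9 years:
risk_over_9_years = 1 - (1 - annual_risk) ** 9
print(f"Risk over 9 years: about 1 in {round(1 / risk_over_9_years):,}")  # ~1 in 135,556

# For probabilities this small, multiplying the annual risk by 9 gives essentially
# the same answer, which is where the rounded 1 in 135,000 figure comes from.
print(f"Simple approximation: about 1 in {round(1_220_000 / 9):,}")       # ~1 in 135,556
```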
It seems like once you get significantly below 0.1%, though, it becomes hard to intuitively grasp the probability of events or find good examples to anchor off of.
I don’t think this should be downvoted. It’s a perfectly fine example of reasoning transparency. I happen to disagree, but the disagree-vote button is there for a reason.
Thank you. Karma downvotes have ceased to mean anything to me.
People downvote for no discernible reason, at least not reasons that are obvious to me, nor that they explain. I’m left to surmise what the reasons might be, including (in some cases) possibly disagreement, pique, or spite.
Neutrally informative things get downvoted, factual/straightforward logical corrections get downvoted, respectful expressions of mainstream expert opinion get downvoted — everything, anything. The content is irrelevant and the tone/delivery is irrelevant. So, I’ve stopped interpreting downvotes as information.
I don’t think this sort of anchoring is a useful thing to do. There is no logical reason for third party presidency success and AGI success to be linked mathematically. It seems like the third party thing is based on much greater empirical grounding.
You linked them because your vague impression of the likelihood of one was roughly equal to your vague impression of the likelihood of the other. If your vague impression of the third-party thing changes, it shouldn’t change your opinion of the other thing. You think that AGI is 5 times less likely than you previously thought because you got more precise odds about one guy winning the presidency ten years ago?
My (perhaps controversial) view is that forecasting AGI is in the realm of speculation where quantification like this is more likely to obscure understanding than to help it.
I don’t think AGI is five times less likely than I did a week ago, I realized the number I had been translating my qualitative, subjective intuition into was five times too high. I also didn’t change my qualitative, subjective intuition of the probability of a third-party candidate winning a U.S. presidential election. What changed was just the numerical estimate of that probability — from an arbitrarily rounded 0.1% figure to a still quasi-arbitrary but at least somewhat more rigorously derived 0.02%. The two outcomes remain logically disconnected.
I agree that forecasting AGI is an area where any sense of precision is an illusion. The level of irreducible uncertainty is incredibly high. As far as I’m aware, the research literature on forecasting long-term or major developments in technology has found that nobody (not forecasters and not experts in a field) can do it with any accuracy. With something as fundamentally novel as AGI, there is an interesting argument that it’s impossible, in principle, to predict, since the requisite knowledge to predict AGI includes the requisite knowledge to build it, which we don’t have — or at least I don’t think we do.
The purpose of putting a number on it is to communicate a subjective and qualitative sense of probability in terms that are clear, that other people can understand. Otherwise, it’s hard to put things in perspective. You can use terms like “extremely unlikely”, but what does that mean? Is something that has a 5% chance of happening extremely unlikely? So, rolling a natural 20 is extremely unlikely? (There are guides to determining the meaning of such terms, but they rely on assigning numbers to the terms, so we’re back to square one.)
Something that works just as well is comparing the probability of one outcome to the probability of another outcome. So, just saying that the probability of near-term AGI is less than the probability of Jill Stein winning the next presidential election does the trick. I don’t know why I always think of things involving U.S. presidents, but my point of comparison for the likelihood of widely deployed superintelligence by the end of 2030 was that I thought it was more likely the JFK assassination turned out to be a hoax, and that JFK was still alive.[1]
I initially resisted putting any definite odds on near-term AGI, but I realized a lack of specificity was hurting my attempts to get my message across.
This approach doesn’t work perfectly, either, because what if different people have different opinions/intuitions on the probability of outcomes like Jill Stein winning? But putting low probabilities (well below 1%) into numbers has a counterpart problem in that you don’t know if you have the same intuitive understanding as someone else of what a 1 in 1,000 chance, a 1 in 10,000 chance, or a 1 in 100,000 chance means with regard to highly irreducibly uncertain events that are rare (e.g. recent U.S. presidential elections), unprecedented (e.g. AGI), or one-off (e.g. Russia ending the current war against Ukraine), and which can’t be statistically or mechanically predicted.
When NASA models the chance of an asteroid hitting Earth as 1 in 25,000 or the U.S. National Weather Service calculates the annual individual risk of being hit by lightning as 1 in 1.22 million, I trust that has some objective, concrete meaning. If someone subjectively guesses that Jill Stein has a 1 in 25,000 chance of winning in 2028, I don’t know if someone with a very similar gut intuition about her odds would also say 1 in 25,000, or if they’d say a number 100x higher or lower.
Possibly forecasters and statisticians have a good intuitive sense of this, but most regular people do not.
What do you mean by this? What is it that you’re 95% confident about?
Maybe this is a misapplication of the concept of confidence intervals — math is not my strong suit, nor is forecasting, so let me know — but what I had in mind is that I’m forecasting a 0.00% to 0.02% probability range for AGI by the end of 2034, and that if I were to make 100 predictions of a similar kind, more than 95 of them would have the “correct” probability range (whatever that ends up meaning).
But now that I’m thinking about it more and doing a cursory search, I think with a range of probabilities for a given date (e.g. 0.00% to 0.02% by end of 2034) as opposed to a range of years (e.g. 5 to 20 years) or another definite quantity, the probability itself is supposed to represent all the uncertainty and the confidence interval is redundant.
As you can tell, I’m not a forecaster.
I kinda get what you’re saying but I think this is double-counting in a weird way. A 0.01% probability means that if you make 10,000 predictions of that kind, then about one of them should come true. So your 95% confidence interval sounds like something like “20 times, I make 10,000 predictions that each have a probability between 0.00% and 0.02%; and 19 out of 20 times, about one out of the 10,000 predictions comes true.”
You could reduce this to a single point probability. The math is a bit complicated but I think you’d end up with a point probability on the order of 0.001% (~10x lower than the original probability). But if I understand correctly, you aren’t actually claiming to have a 0.001% credence.
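If it helps to see that reduction spelled out, here’s a minimal sketch of one way to do it. The distribution shape and the specific percentile values below are purely my own illustrative assumptions, not the commenter’s actual math: treat the unknown probability as itself uncertain, put an assumed distribution over it, and take that distribution’s mean as the single point probability.

```python
import numpy as np

# Minimal sketch (illustrative assumptions only, not the commenter's derivation):
# model uncertainty over the unknown probability p with a lognormal whose
# 2.5th/97.5th percentiles are assumed values. The single point probability
# is then just the mean of that assumed distribution.
rng = np.random.default_rng(0)

upper = 2e-4   # assumed 97.5th percentile: the 0.02% end of the stated range
lower = 2e-7   # assumed 2.5th percentile; "0.00%" has to become some small
               # positive number for a lognormal to make sense

mu = (np.log(upper) + np.log(lower)) / 2   # log-space median
sigma = (np.log(upper) - mu) / 1.96        # spread implied by the 95% range

p = rng.lognormal(mean=mu, sigma=sigma, size=1_000_000)
print(f"implied point probability ≈ {p.mean():.4%}")   # ≈ 0.003% under these assumptions
```

Under these particular assumptions the mean lands around 0.003%, several times below the 0.02% bound; a figure like 0.001% would follow from a different assumed shape or spread, so the exact answer depends entirely on the modelling choices.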
I think there are other meaningful statements you could make. You could say something like, “I’m 95% confident that if I spend 10x longer studying this question, then I would end up with a probability between 0.00% and 0.02%.”
Yeah, I’m saying the probability is significantly less than 0.02% without saying exactly how much less — that’s much harder to pin down, and there are diminishing returns to exactitude here — so that means it’s a range from 0.00% to <0.02%. Or just <0.02%.
The simplest solution, and the correct/generally recommended solution, seems to be to simply express the probability, unqualified.
Yann LeCun (a Turing Award-winning pioneer of deep learning) leaving Meta AI — and probably, I would surmise, being nudged out by Mark Zuckerberg (or another senior Meta executive) — is a microcosm for everything wrong with AI research today.
LeCun is the rare researcher working on fundamental new ideas to push AI forward on a paradigm level. Zuckerberg et al. seem to be abandoning that kind of work to focus on a mad dash to AGI via LLMs, on the view that enough scaling and enough incremental engineering and R&D will push current LLMs all the way to AGI, or at least very powerful, very economically transformative AI.
I predict that in five years or so, this will be seen in retrospect (by many people, if not by everyone) as an incredibly wasteful mistake by Zuckerberg, and also by other executives at other companies (and the investors in them) making similar decisions. The amount of capital being spent on LLMs is eye-watering and could fund a lot of fundamental research, some of which could have turned up some ideas that would actually lead to useful, economically and socially beneficial technology.
LeCun is also probably one of the top people to have worsened the AI safety outlook this decade, and from that perspective perhaps his departure is a good thing for the survival of the world, and thus also Meta’s shareholders?
I couldn’t disagree more strongly. LeCun makes strong points about AGI, AGI alignment, LLMs, and so on. He’s most likely right. I think the probability of AGI by the end of 2032 is significantly less than 1 in 1,000 and the probability of LLMs scaling to AGI is even less than that. There’s more explanation in a few of my posts. In order of importance: 1, 2, 3, 4, and 5.
The core ideas that Eliezer Yudkowsky, Nick Bostrom, and others came up with about AGI alignment/control/friendliness/safety were developed long before the deep learning revolution kicked off in 2012. Some of Yudkowsky’s and Bostrom’s key early writings about these topics are from as far back as the early 2000s. To quote Clara Collier writing in Asterisk:
So, regardless of the timeline of AGI, that’s dubious.
LessWrong’s intellectual approach has produced about half a dozen cults, but despite many years of effort, millions of dollars in funding, and the hard work of many people across various projects, and despite many advantages, such as connections that can open doors, it has produced nothing of objective, uncontroversial, externally confirmable intellectual, economic, scientific, technical, or social value. The perceived value of anything it has produced is solely dependent on whether you agree or disagree with its worldview — I disagree. LessWrong claims to have innovated a superior form of human thought, and yet has nothing to show for it. The only explanation that makes any sense is that they’re wrong, and are just fooling themselves. Otherwise, to quote Eliezer Yudkowsky, they’d be “smiling from on top of a giant heap of utility.”
Yudkowsky’s and LessWrong’s views on AGI are correctly seen by many experts, such as LeCun, as unserious and not credible, and, in turn, the typical LessWrong response to LeCun is unacceptably bad intellectually: it fails to understand his views on a basic level, let alone respond to them convincingly.
Why would any rational person take that seriously?
Just calling yourself rational doesn’t make you more rational. In fact, hyping yourself up about how you and your in-group are more rational than other people is a recipe for being overconfidently wrong.
Getting ideas right takes humility and curiosity about what other people think. Some people pay lip service to the idea of being open to changing their mind, but then, in practice, it feels like they would rather die than admit they were wrong.
This is tied to the idea of humiliation. If disagreement is a humiliation contest, changing one’s mind can feel emotionally unbearable. Because it feels as if to change your mind is to accept that you deserve to be humiliated, that it’s morally appropriate. Conversely, if you humiliated others (or attempted to), to admit you were wrong about the idea is to admit you wronged these people, and did something immoral. That too can feel unbearable.
So, a few practical recommendations:
-Don’t call yourself rational or anything similar
-Try to practice humility when people disagree with you
-Try to be curious about what other people think
-Be kind to people when you disagree so it’s easier to admit if they were right
-Avoid people who aren’t kind to you when you disagree so it’s easier to admit if you were wrong
Has anyone else noticed anti-LGBT and specifically anti-trans sentiment in the EA and rationalist communities? I encountered this recently and it was bad enough that I deactivated my LessWrong account and quit the Dank EA Memes group on Facebook.
I’m sorry you encountered this, and I don’t want to minimise your personal experience.
I think once any group becomes large enough, there will be people who associate with it who harbour all sorts of sentiments, including the ones you mention.
On the whole though, I’ve found the EA community (both online and those I’ve met in person) to be incredibly pro-LGBT and pro-trans. Both the underlying moral views (e.g. non-traditionalism, impartiality, cosmopolitanism, etc.) point that way, as do the underlying demographics (e.g. young, highly educated, socially liberal).
I think where there might be a split is in progressive (as in, politically leftist) framings of issues and the type of language used to talk about these topics. I think those often find it difficult to gain purchase in EA, especially on the rationalist/LW-adjacent side. But I don’t think that means the community as a whole, or even that sub-section, is ‘anti-LGBT’ or ‘anti-trans’, and I think there are historical and multifaceted reasons why there’s some enmity between ‘progressive’ and ‘EA’ camps/perspectives.
Nevertheless, I’m sorry that you experienced this sentiment, and I hope you’re feeling ok.
The progressive and/or leftist perspective on LGB and trans people offers the most forthright argument for LGB and trans equality and rights. The liberal and/or centre-left perspective tends to be more milquetoast, more mealy-mouthed, more fence-sitting.
The context for what I’m discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here.
Warning: This is a polemic that uses harsh language. I still completely, sincerely mean everything I say here and I consciously endorse it.[1]
It has never stopped shocking and disgusting me that the EA Forum is a place where someone can write a post arguing that Black Africans need Western-funded programs to edit their genomes to increase their intelligence in order to overcome global poverty, and can cite overtly racist and white supremacist sources to support this argument (including a source with significant connections to the 1930s and 1940s Nazi Party in Germany and to the American Nazi Party, a neo-Nazi party), and that the post can receive a significant amount of approval and defense from people in EA, even after perceptive readers have stripped away the thin disguise over the racism. That is such a bonkers thing and such a morally repugnant thing that I keep struggling to find words to express my exasperation and disbelief. Effective altruism as a movement probably deserves to fail for that, if it can’t correct it.[2]
My loose, general impression is that people who got involved in EA because of global poverty and animal welfare tend to be broadly liberal or centre-left and tend to be at least sympathetic toward arguments about social justice and anti-racism. Conversely, my impression of LessWrong and the online/Bay Area rationalist community is that they don’t like social justice, anti-racism, or socially/culturally progressive views. One of the most bewildering things I ever read on LessWrong was one of the site admins (an employee of Lightcone Infrastructure) arguing that closeted gay people probably tend to have low moral integrity because being closeted is a form of deception. I mean, what?! This is the “rationalist” community?? What are you talking about?! As I recall based on votes, a majority of forum users who voted on the comment agreed.[3]
Overall, LessWrong users seem broadly sympathetic to racist arguments and views.[4] Same for sexist or anti-feminist views, and extremely so for anti-LGBT (especially anti-trans) views. Personally, I find it to be the most unpleasant website I’ve spent more than ten hours reading. When I think of LessWrong, I picture a dark, dingy corner of a house. I truly find it to be awful.
The more I’ve thought about it, the more truth I find in the blogger Ozy Brennan’s interpretation of LessWrong and the rationalist community through the concept of the “cultic milieu” and a comparison to new religious movements (not cults in the more usual sense connoting high-control groups). Ozy Brennan self-identifies as a rationalist and is active in the community, which makes this analysis far more believable than if it came from an outsider. The way I’d interpret Ozy’s blog post, which Ozy may not agree with, is that rationalists are in some sense fundamentally devoted to being incorrect, since they’re fundamentally devoted to being against consensus or majority views on many major topics — regardless of whether those views are correct or incorrect — and inevitably that will lead to having a lot of incorrect views.
I see very loose, very limited analogies between LessWrong and online communities devoted to discussing conspiracy theories like QAnon, or online incel communities. Conspiracy theories because LessWrong has a suspicious, distrustful, at least somewhat paranoid or hypervigilant view of people and the world, this impulse to turn over rocks to find where the bad stuff is. Also, there’s the impulse to connect too much. To subsume too much under one theory or worldview. And too much reliance on one’s own fringe community to explain the world and interpret everything. Both, in a sense, are communities built around esoteric knowledge. And, indeed, I’ve seen some typical sort of conspiracy theory-seeming stuff on LessWrong related to American intelligence agencies and so on.
Incel communities because the atmosphere of LessWrong feels rather bitter, resentful, angry, unhappy, isolated, desperate, arrogant, and hateful, and in its own way is also a sort of self-help or commiseration community for young men who feel left out of the normal social world. But rather than encouraging healthy, adaptive responses to that experience, both communities encourage anti-social behaviour, leaning into distorted thinking, resentment, and disdainful views of other people.
I just noticed that Ozy recently published a much longer article in Asterisk Magazine on the topic of actual high-control groups or high-demand groups with some connection to the rationalist community. It will take me a while to properly read the whole thing and to think about it. But at a glance, there are some aspects of the article that are relevant to what I’m discussing here, such as this quote:
And this quote:
And this one:
In principle, you could have the view that the typical or median person is benefitted by the Sequences or by LessWrong or the rationalist community, and it’s just an unfortunate but uncommon side-effect for people to slip into cults or high-control groups. It sounds like that’s what Ozy believes. My view is much harsher: by and large, the influence that LessWrong/the rationalist community has on people is bad, and people who take these ideas and this subculture to an extreme are just experiencing a more extreme version of the bad that happens to pretty much everyone who is influenced by these ideas and this subculture. (There might be truly minor exceptions to this, but I still see this as the overall trend.)
Obviously, there is now a lot of overlap between the EA Forum and LessWrong and between EA and the rationalist community. I think to the extent that LessWrong and the rationalist community have influenced EA, EA has become something much worse. It’s become something repelling to me. I don’t want any of this cult stuff. I don’t want any of this racist stuff. Or conspiracy theory stuff. Or harmful self-help stuff for isolated young men. I’m happy to agree with the consensus view most of the time because I care about being correct much more than I care about being counter-consensus. I am extremely skeptical toward esoteric knowledge and I think it’s virtually always either nonsense or prosaic stuff repackaged to look esoteric. I don’t buy these promises of unlocking powerful secrets through obscure websites.
There was always a little bit of overlap between EA and the rationalist community, starting very early on, but it wasn’t a ruinous amount. And it’s not like EA didn’t independently have its own problems before the rationalist community’s influence increased a lot, but those problems seemed more manageable. The situation now feels like the rationalist community is unloading more and more of its cargo onto the boat that is EA, and EA is just sinking deeper and deeper into the water over time. I feel sour and queasy about this because EA was once something I loved and it’s becoming increasingly laden with things I oppose in the strongest possible terms, like racism, interpersonal cruelty, and extremely irrational thinking patterns. How can people who were in EA because of global poverty and animal welfare, who had no previous affiliation with the rationalist community, stand this? Are they all gone already? Have they opted to recede from public arguments and just focus on their own particular niches? What gives?
And to the extent that the racism in EA is independently EA’s problem and has nothing to do with the influence of the rationalist community (that extent obviously has to be more than nothing), it is 100% EA’s problem. But I can’t imagine racism in EA could be satisfactorily addressed without significant conflict with, and alienation of, many people who overlap between the rationalist community and EA and who either endorse or strongly sympathize with racist views. (For example, in January 2023, when the Centre for Effective Altruism published a brief statement that affirmed the equality of Black people in response to the publication of a racist email by the philosopher Nick Bostrom, the most upvoted comment was from a prominent rationalist; it started, “I feel really quite bad about this post,” and argued at length that universal human equality is not a tenet of effective altruism. This is unbelievably foolish and unbelievably morally wrong. Independently, that person has said and done things that seem to indicate either support or sympathy for racist views, which makes me think it was probably not just a big misunderstanding.)[5] That’s why I’ve diverted from talking about racism on the EA Forum into discussing the rationalist community’s influence on EA.
Racism is paradigmatically evil and there is no moral or rational justification for it. Don’t lose sight of something so fundamental and clear. Don’t let EA drown under the racism and all the other bad stuff people want to bring to it. (Hey, now EA is the drowning child! Talk about irony!)
Incidentally, LessWrong and the rationalist community are dead wrong about near-term AGI as well — specifically, the probability of AGI before January 1, 2033 is significantly less than 0.1% and the MIRI worldview on alignment is most likely either just generally wrong/misguided or at least not applicable to deep learning-based systems — and that poses its own big problem for EA to the extent that EA has been influenced to accept LessWrong/rationalists’ views about near-term AGI. So, the influence of the rationalist community on EA has been damaging in multiple respects. (Although, again, EA bears responsibility for its part in all of it, both allowing the influence and for whatever portion of the mistake it would have made without that influence.)
About a day after posting this quick take, I changed its first sentence from just italicized to a heading, to make the links to the Reflective Altruism posts more prominent and harder to miss. The sentence was always there.
Edited on October 25, 2025 at 11:22 PM Eastern to add: If you don’t know about the incident I’m referring to here, the context for what I’m discussing is explained in two Reflective Altruism posts: part 1 here and part 2 here. The links to these Reflective Altruism posts were always in the first sentence of this quick take, but I’m adding this footnote to make those links harder to miss.
Edited on October 25, 2025 at 10:52 PM Eastern to add: I purposely omitted a link to this comment because I didn’t want to make this quick take a confrontation against the person who wrote it. But if you don’t believe me and you want to see the comment for yourself, send me a private message and I’ll send you the link.
Edited on October 25, 2025 at 11:25 PM Eastern to add: This is extensively documented in a different Reflective Altruism post from the two I have already linked, which you can find here.
Edited on October 25, 2025 at 11:19 PM Eastern to add: I’m referring here to the Manifest 2024 conference, which was held at Lighthaven in Berkeley, California, a venue owned by Lightcone Infrastructure, the same organization that owns and operates LessWrong. I’m also referring to the discussions that happened after the event. There have been many posts about this event on the EA Forum. One post I found interesting was from a pseudonymous self-described member of the rationalist community who was critical of some aspects of the event and of some aspects of the rationalist community. You can read that post here.
Edited on October 26, 2025 at 11:57 PM Eastern to add: See also the philosopher David Thorstad’s post about Manifest 2024 on his blog Reflective Altruism. David’s posts are nearly encyclopedic in their thoroughness and have an incredibly high information density.
A possible explanation for why this post is heavily downvoted:
It makes serious, inflammatory, accusatory, broad claims in a way that does not promote civil discussion
It rarely cites specific examples and facts that would serve to justify these claims
You linked to an article by Reflective Altruism, but I think it would have been beneficial to put links to specific examples directly in your text.
Two of the specific examples you use do not seem to be presented accurately:
You’re talking about that post. However, you’re failing to mention that it currently has negative karma and twice as much disagreement as agreement. If anything, it is representative of something that EA (as a whole) does not support.
Your claim implies that the commenter said that because of racist reasons. However, they say in that very comment that “I value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality.” And much of their disagreement centred on the form of the statement.
Why did you not specify that in your post?
Thank you for your comment.
I am a strong believer in civility and kindness, and although my quick take used harsh language, I think that is appropriate. I think, in a way, it can even be more respectful to speak plainly, directly, and honestly, as opposed to being passive-aggressive and dressing up insults in formal language.
I am expecting people to know the context or else learn what it is. It would not be economical for me to simply recreate the work already done on Reflective Altruism in my quick take.
That post only has negative karma because I strong downvoted it. If I remove my strong downvote, it has 1 karma. 8 agrees and 14 disagrees is more disagrees than agrees, but this is not a good ratio. Also, this is about more than just the scores on the post, it’s also about the comments defending the post, both on that post itself and elsewhere, and the scores on those comments.
I don’t think racist ideas should have 1 karma when you exclude my vote.
I think if someone says in response to a racist email about Black people from someone in our community that Black people have equal value to everyone else, your response should not be to argue that Black people have “approximately” as much value as white people. Normally, I would extend much more benefit of the doubt and try to interpret the comment more charitably, but subsequent evidence — namely, the commenter’s association with and defense of people with extreme racist views — has made me interpret that comment much less charitably than I otherwise would. In any case, even on the most charitable interpretation, it is foolish and morally wrong.
I think the issue is that, from my standpoint, there is a combination of harsh language, many broad claims about EA and LessWrong that are both very negative and vague, and a lack of specific evidence in the text.
I expect few people here to be swayed by this kind of communication, since you may simply be overreacting and have an extremely low threshold for using terms like “racism”. It’s the kind of discourse I tend to see on Twitter.
As an example of what I’d call an overreaction: when you say that someone did something “unbelievably foolish and unbelievably morally wrong,” I am thinking of very bad stuff, like committing fraud with charity money.
I am not thinking about a comment where someone said “I value people approximately equally in impact estimates” (instead of “absolutely equally”). The lack of evidence means I can’t judge the commenter’s specific intentions.
There is a lot of context to fill people in on and I’ll leave that to the Reflective Altruism posts. I also added some footnotes that provide a bit more context. I wasn’t even really thinking about explaining everything to people who don’t already know the background.
I may be overreacting or you may be underreacting. Who’s to say? The only way to find out is to read the Reflective Altruism posts I cited and get the background knowledge that my quick take presumes.
I agree that discourse on Twitter is unbelievably terrible, but one of the ways that I believe using Twitter harms your mind is you just hear the most terrible points and arguments all the time, so you come to discount things that sound facially similar in non-Twitter contexts. I advocate that people completely quit Twitter (and other microblogging platforms like Bluesky) because I think it gets people into the habit of thinking in tweets, and thinking in tweets is ridiculous. When Twitter started, it was delightfully inane. The idea of trying to say anything in such a short space was whimsical. That it’s been elevated to a platform for serious discourse is absurd.
Again, the key context for that comment is that an extremely racist email by the philosopher Nick Bostrom was published that used the N word and said Black people are stupid. The Centre for Effective Altruism (CEA) released a very short, very simple statement that said all people are equal, i.e., in this context, Black people are equal to everyone else.
The commenter responded harshly against CEA’s statement and argued a point of view that, in context, reads as the view that Black people have less moral value than white people. And since then, that commenter was involved in a controversy around racism, i.e., the Manifest 2024 conference. If you’re unfamiliar, you can read about that conference on Reflective Altruism here.
In that post, there’s a quote from Shakeel Hashim, who was previously the Head of Communications at the Centre for Effective Altruism (CEA):
So, don’t take my word for it.
The hazard of speaking too dispassionately or understating things is it gives people a misleading impression. Underreacting is dangerous, just as overreacting is. This is why the harsh language is necessary.
Yes, knowing the context is vital to understanding where the harsh language is coming from, but I wasn’t really writing for people who don’t have the context (or who won’t go and find out what it is). People who don’t know the context can dismiss it, or they can become curious and want to find out more.
But colder, calmer, more understated language can also be easily dismissed, and is not guaranteed to elicit curiosity, either. And the danger there is that people tend to assume if you’re not speaking harshly and passionately, then what you’re talking about isn’t a big deal. (Also, why should people not just say what they really mean?)
Thanks for the answer; it explains things better for me.
I’ll just point out that another element that bugged me about the post was the lack of balance. It felt like things were written with an attitude that tries to judge everything in a negative light, which doesn’t make it trustworthy in my opinion.
Two examples:
The email was indeed racist, but Nick Bostrom wrote it 26 years ago and has since apologised for it (the apology itself can be debated, but this is still important missing context).
The comment literally states the opposite, and I did provide quotes. It really feels like you are trying to interpret things uncharitably.
So far, I feel like the examples provided are mostly debatable. I’d expect more convincing stuff before concluding there is a deep systemic issue to fix.
The quote from CEA’s former head of communications is more relevant evidence, I must admit, though I don’t know how widespread or accurate their perception is (it doesn’t really match what I’ve seen).
I’d also appreciate some balance by highlighting all the positive elements EA brings to the table, such as literally saving the lives of thousands of Black people in Africa.
I think the overall theme of your complaints is that I don’t provide enough context for what I’m talking about, which is fair if you’re reading the post without that context, but a lot of posts on the EA Forum are “inside baseball” that assume the reader has a lot of context. So, maybe this is an instance of context collapse, where something written with one audience in mind gets read, and interpreted differently, by another audience with less context or a different context.
I don’t think it’s wrong for you to have the issues you’re having. If I were in your shoes, I would probably have the same issues.
But I don’t know how you could avoid these issues and still have “inside baseball” discussion on the EA Forum. This is a reason the “community” tag exists on the forum. It’s so people can separate posts that are interesting and accessible to a general audience from posts that only make sense if your head has already been immersed in the community stuff for a while.
I agree this is important context, but this is the sort of “inside baseball” stuff where I generally assume the kind of people interested in reading EA Forum community posts are already well aware of what happened, and I’m only providing more context now because you’re directly asking me about it. Reflective Altruism is excellent because the author of that blog, David Thorstad, writes what amount to encyclopedia articles of context for these sorts of things. So, I just refer you to the relevant Reflective Altruism posts about the topics you’re interested in. (There is a post on the Bostrom email, for example.)
The comment says that people are approximately equally valuable, not that they are equally valuable, and it’s hard to know what exactly this means to its author. But the context is that CEA is saying Black people are equally valuable, and the commenter is saying he disagrees, feels bad about what CEA is saying, and harshly criticizes CEA for saying it. And, subsequently, that commenter organized a conference that was friendly to people with extreme racist views such as white nationalism. The subsequent discussion of that conference did not allay the concerns of people troubled by that.
What we’re talking about here is stuff like people defending slavery, defending colonialism, defending white nationalism, defending segregation, defending the Nazi regime in Germany, and so on. I am not exaggerating. This is literally the kind of things these people say. And the defenses about why people who say such things should be welcomed into the effective altruist community are not satisfactory.
For me, this is a case where, at multiple steps, I have left a more charitable interpretation open, but, at multiple turns, the subsequent evidence has pointed to the conclusion that Shakeel Hashim (the former Head of Communications at the CEA) came to: that this is just straight-up racism.
I refer you to the following Reflective Altruism posts: Human Biodiversity (Part 2: Manifest), about the Manifest 2024 conference and the ensuing controversy around racism, and Human Biodiversity (Part 7: LessWrong). The post on LessWrong has survey data that supports Shakeel Hashim’s comment about racism in the rationalist community.
I’ve had an intense interest in and affinity for effective altruism since before it was called effective altruism. I think it must have been in 2008 when I joined a Facebook group called Giving What We Can, created by the philosopher Toby Ord. As I recall, it had just a few hundred members, maybe around 200. The website for Giving What We Can was still under construction, and I don’t think the organization had been legally incorporated at that point. So, this has been a journey of 17 years for me, which is more than my entire adult life. Effective altruism has been an important part of my life story. Some of my best memories of my time at university are of the friends I made through my university effective altruism group. That’s a time in my life I will always treasure and bittersweetly reminisce on: sweetly because it was so beautiful, bitterly because it’s over.
If I thought there was nothing good about EA, I wouldn’t be on the EA Forum, and I wouldn’t be writing things about how to diagnose and fix EA’s problems. I would just disavow it and disassociate myself from it, as sadly many people have already done by now. I love the effective altruism I knew in the decade from 2008 to 2018, and it would be sad to me if that’s no longer on the Earth. For instance, I do think saving the lives of people living in poverty in sub-Saharan Africa is a worthy cause and a worthy achievement. This is precisely why I don’t like EA both abandoning global poverty as a cause area and allowing the encroachment of the old colonialist, racist ideas that the people I admire in international development like the economist William Easterly (author of the book The White Man’s Burden and the old blog Aid Watch) warned us so insistently we needed to avoid in contemporary international aid work.
Can you imagine a worse corruption, a worse twisting of this than to allow talk about why Black people are more genetically suited to slavery than white people, or how Europe did Africa a favour by colonizing it, or how Western countries should embrace white nationalism? That’s fucking insanity. That is evil. If this is what effective altruism is becoming, then as much as I love what effective altruism once was, effective altruism should die. It has betrayed what it once was and, on the values of the old effective altruism, the right decision would be to oppose the new effective altruism. It really couldn’t be more clear.
Thanks, I understand the context and where you’re coming from better now. This style is easier for me to read and I appreciate that.
I won’t have much more time for this conversation, but just two points:
Is this actually true? As far as I can tell, global poverty is still number one in terms of donations, GiveWell is doing great, and most of the Charity Entrepreneurship charities work on this topic.
Oh, yes, that would be awful. But I’d expect that virtually everybody in the EA forum would be against that.
And so far, in the examples you’ve given, you don’t show that even a sizeable minority of people would agree with these claims. For instance, for Manifold, you pointed to the fact that some EAs work with a forecasting organisation from the rationalist community that held a conference, which invited many speakers to talk about forecasting, and some of those speakers had previously written racist things on topics unrelated to the conference (and even then, that led to quite a debate).
My understanding might be inaccurate, of course, but that’s such a long chain that I would consider this quite far from a prevalent issue which currently has large negative consequences.
Another issue, and the reason the comment is getting downvoted heavily (including by me), is that you seem to conflate “is” and “ought” in this post, and without that conflation, this post would not exist.
You routinely leap from “a person has moral views that are offensive to you” to “they are wrong about the facts of the matter”, and your evidence for this is paper thin at best.
Being able to separate moral views from beliefs on factual claims is one of the things that is expected if you are in EA/LW spaces.
This is not mutually exclusive with the issues CB has found.
I don’t agree with this evaluation and, as stated, it’s just an unsupported assertion. So, there is nothing really here for me to respond to except to say I disagree.
It would help to have an example of what you mean by this. I imagine, if you gave an example, I would probably say that I think your characterization is simply wrong, and I find your wording obnoxious. This comes across as trying to insult me personally rather than trying to make a substantive argument that could conceivably be persuasive to me or to any outside person who’s on the fence about this topic.
I’m guessing you may have wrongly inferred that I reject certain factual claims on moral grounds, when really I reject them on factual grounds and part of what I’m criticizing is the ignorance or poor reasoning that I strain to imagine must be required to believe such plainly false and obviously ridiculous things. Yet it is also fair to criticize such epistemic mistakes for their moral ramifications. For example, if someone thinks world affairs are orchestrated by a global Jewish conspiracy, that’s just an unbelievably stupid thing to think and they can be rightly criticized for believing something so stupid. They can also rightly be criticized for this mistake because it also implies immoral conduct, namely, unjustifiable discrimination and hatred against Jewish people. If someone thinks this is a failure to decouple or a failure to appreciate the is/ought distinction, they don’t know what they’re talking about. In that case, they should study philosophy and not make up nonsense.[1]
But I will caveat that I actually have no idea what you meant, specifically, because you didn’t say. And maybe what you intended to say was actually correct and well-reasoned. Maybe if you explained your logic, I would accept it and agree. I don’t know.
I don’t know what you meant by your comment specifically, but, in general, I have sometimes found arguments about decoupling to be just unbelievably poorly reasoned because they don’t account for the most basic considerations. (The problem is not with the concept of decoupling in principle, in the abstract, it’s that people try to apply this concept in ways that make no sense.)[2] They are woefully incurious about what the opposing case might be and often contradict plain facts. For example, they might fail to distinguish between the concept of a boycott of an organization with morally objectionable views that is intended to have a causal impact on the world vs. the concept of acknowledging both positive and negative facts about that organization. For example:
Person A: I don’t want to buy products from Corporation Inc. because they fund lobbying for evil policies.
Person B: But Corporation Inc. makes good products! Learn to decouple!
(This is based on a real example. Yes, this is ridiculous, and yet something very similar to this was actually said.)
People don’t understand the basic concepts being discussed — e.g., the concept of a boycott and the rationale for boycotts — and then they say, “tut, tut, be rational!” but anyone could say “tut, tut, be rational” when anyone disagrees with them about anything (even in the cases they happen to be dead wrong and say things that don’t make sense), so what on Earth is the point of saying that?
This kind of “tut, tut” comes across to me as epistemically sloppy. The more you scold someone who disagrees with you, the more you lose face if you have to admit you made an embarrassing reasoning mistake, so the less likely you will be to admit such mistakes and the more you’ll double down on silly arguments because losing face is so uncomfortable. So, a good way to hold wrong views indefinitely is to say “tut, tut” as much as possible.
But, that’s only generally speaking, and I don’t know what you meant specifically. Maybe what you meant to say actually made sense. I’ll give you the benefit of the doubt, and an opportunity to elaborate, if you want.
This also obviously applies to prudential cases, in addition to moral cases. If you make a stupid mistake like putting the cereal in the fridge and the milk in the cupboard, you can laugh about that because the stakes are low. If you make a stupid mistake that is also dangerous to you, such as mixing cleaning products that contain bleach and ammonia (which produces chlorine gas), then you can criticize this mistake on prudential grounds as well as epistemic grounds. (To criticize a mistake on prudential or moral grounds is only valid if it is indeed a mistake, obviously.) And no one should assert this criticism is based on some kind of basic logical error where you’re failing to distinguish prudential considerations from epistemic ones — anyone saying that would not know what they’re talking about and should take a philosophy class.
In general, a common sort of reasoning error I observe is that people invoke a correct principle and apply it incorrectly. When they are pressed on the incorrect application, they fall back to defending the principle in the abstract, which is obviously not the point. By analogy, if someone you knew was talking about investing 100% of their savings in GameStop, it would be exasperating if they defended this decision by citing — very strong, quite plausibly completely correct — research about how it’s optimal to have an all-equity portfolio. It would be infuriating if they accused you of not understanding the rationale for investing in equities simply because you think a 100% GameStop portfolio is reckless. The simple lesson of this analogy: applying correct principles does not lead to correct conclusions if the principle is applied incorrectly! It’s obvious to spot when I deliberately make the example obvious to illustrate the point, but often less obvious to spot in practice — which is why so many people make errors of this kind so often.
An example here is this quote, which comes dangerously close to “these people have a morality that you find offensive, therefore they are wrong on the actual facts of the matter” (otherwise you would make the Nazi source allegations less central to your criticism here):
(I don’t hold the moral views of what the quote is saying, to be clear).
It’s really quite something that you wrote almost 2000 words and didn’t include a single primary citation to support any of those claims. Even given that most of them are transparently false to anyone who’s spent 5 minutes reading either LW or the EA Forum, I think I’d be able to dig up something superficially plausible with which to smear them.
And if anyone is curious about why Yarrow might have an axe to grind, they’re welcome to examine this post, along with the associated comment thread.
Edit: changed the link to an archive.org copy, since the post was moved to draft after I posted this. Edit 2: I was incorrect about when it was moved back to a draft; see this comment.
I believe Yarrow is referencing this series of articles from David Thorstad, which quotes primary sources extensively.
The sources are cited in quite literally the first sentence of the quick take.
To my knowledge, every specific factual claim I made is true and none are false. If you want to challenge one specific factual claim, I would be willing to provide sources for that one claim. But I don’t want to be here all day.
Since I guess you have access to LessWrong’s logs given your bio, are you able to check when and by whom that LessWrong post was moved to drafts, i.e., if it was indeed moved to drafts after your comment and not before, and if it was, whether it was moved to drafts by the user who posted it rather than by a site admin or moderator?
My bad, it was moved back to draft on October 3rd (~3 weeks ago) by you. I copied the link from another post that linked to it.
Hey, so you’re 0 for 2 on your accusations! Want to try again?
And, indeed, this seems to show your accusation that there was an attempt to hide the post after you brought it up was false. An apology wouldn’t hurt!
The other false accusation was that I didn’t cite any sources, when in fact I did in the very first sentence of my quick take. Apart from that, I also directly linked to an EA Forum post in my quick take. So, however you slice it, that accusation is wrong. Here, too, an apology wouldn’t hurt if you want to signal good faith.
My offer is still open to provide sources for any one factual claim in the quick take if you want to challenge one of them. (But, as I said, I don’t want to be here all day, so please keep it to one.)
Incidentally, in my opinion, that post supports my argument about anti-LGBT attitudes on LessWrong. I don’t think I could have much success persuading LessWrong users of that, however, and that was not the intention of this quick take.
Yes, indeed, there was only an attempt to hide the post three weeks ago. I regret the sloppiness in the details of my accusation.
I did not say that you did not cite any sources. Perhaps the thing I said was confusingly worded? You did not include any links to any of the incidents that you describe.
Huh? Why not just admit your mistake? Why double down on an error?
By the way, who do you think saved that post in the Wayback Machine on the exact same date it was moved to drafts? A remarkable coincidence, wouldn’t you say?
Your initial comment insinuated that the incidents I described were made up. But the incidents were not made up. They really happened. And I linked both to extensive documentation on Reflective Altruism and directly to a post on the EA Forum so that anyone could verify that the incidents I described occurred.
There was one incident I described that I chose not to include a link to out of consideration for your coworker. I wanted to avoid presenting the quick take as a personal attack on them. (That was not the point of what I wrote.) I still think that is the right call. But I can privately provide the link to anyone who requests it if there is any doubt this incident actually occurred.
But, in any case, I very much doubt we are going to have a constructive conversation at this point. Even though I strongly disagree with your views and I still think you owe me an apology, I sincerely wish you happiness.
Thanks for sharing your misgivings.
I think it may be illuminating to conceptualise EA as having several “attractor failure modes” it can coalesce into if insufficient attention is paid to keeping EA community spaces away from them. You’ve noted some of these failure modes in your post, and they are often related to other things that overlap with EA. They include (but are not limited to):
The cultic self-help conspiratorial milieu (probably from rationalism)
Racism and eugenicist ideas
Doomspirals (many versions depending on cause area, but “AI will kill us all P(doom) = 95%” is definitely one of them)
The question, then, is how does one balance community moderation to both promote the environment of individual truth seeking necessary to support EA as a philosophical concept, while also striving to avoid these, given a documented history within EA of them leading to things that don’t work out so well? I wonder what CEA’s community health team have said on the matter.
I’m very glad of Reflective Altruism’s work and I’m sorry to see the downvotes on this post. Would you consider a repost as a main post with dialed down emotive language in order to better reach people? I’d be happy to give you feedback on a draft.
Thanks. I’ll think about the idea of doing a post, but, honestly, what I wrote was what I wanted to write. I don’t see the emotion or the intensity of the writing as a failure or an indulgence, but as me saying what I really mean, and saying what needs to be said. What good’s sugar-coating it?
Something that anyone can do (David Thorstad has given permission in comments I’ve seen) is simply repost the Reflective Altruism posts about LessWrong and about the EA Forum here, on the EA Forum. Those posts are extremely dry, extremely factual, and not particularly opinionated. They’re more investigative than argumentative.
I have thought about what, practically, to do about these problems in EA, but I don’t think I have particularly clear thoughts or good thoughts on that. An option that would feel deeply regrettable and unfortunate to me would be for the subset of the EA movement that shares my discomfort to try to distinguish itself under some label such as effective giving. (Someone could probably come up with a better label if they thought about it for a while.)
I hope that there is a way for people like me to save what they love about this movement. I would be curious to hear ideas about this from people who feel similarly.