Distancing EA from rationality is foolish

Edit: If you are landing here from the EA Forum Digest, note that this piece is not about Manifest, and I don’t want it to be framed as being about Manifest.

Recently, I’ve noticed a growing tendency within EA to dissociate from Rationality. Good Ventures has stopped funding efforts connected with the rationality community and rationality, and there are increasing calls for EAs to distance themselves from it.

This trend concerns me, and I believe an important distinction is being missed when considering this split.

We need to differentiate between ‘capital R’ Rationality and ‘small r’ rationality. By ‘capital R’ Rationality, I mean the actual Rationalist community, centered around Berkeley: a package deal that includes ideas about self-correcting lenses and systematized winning, but also extensive jargon, cultural norms like polyamory, a high-decoupling culture, and familiarity with specific memes (ranging from ‘Death with Dignity’ to ‘came in fluffer’).

On the other hand, ‘small r’ rationality is a more general concept. It encompasses the idea of using reason and evidence to form conclusions, scout mindset, and empiricism. It also includes a quest to avoid getting stuck with beliefs resistant to evidence, techniques for reflecting on and improving mental processes, and, yes, many of the core ideas of Rationality, like understanding Bayesian reasoning.
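To make this concrete, here is a minimal sketch of one of the ‘small r’ tools mentioned above, Bayesian updating. The function and the numbers are purely illustrative, not drawn from any particular EA analysis.

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(hypothesis | evidence) via Bayes' rule."""
    # Total probability of seeing the evidence under either hypothesis.
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Hypothetical example: start 20% confident an intervention works; a
# positive pilot result is 3x likelier if it works (0.6) than if it
# doesn't (0.2). The update roughly doubles the credence.
posterior = bayes_update(prior=0.2,
                         p_evidence_if_true=0.6,
                         p_evidence_if_false=0.2)
print(round(posterior, 2))  # 0.43
```

The point is not the arithmetic itself but the habit: stating a prior, asking how likely the evidence is under each hypothesis, and moving your credence by the appropriate amount rather than flipping between certainty and dismissal.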

If people want to distance themselves, it’s crucial to be clear about what they’re distancing from. I understand why some might want to separate from aspects of the Rationalist community: perhaps they dislike the discourse norms, worry about negative media coverage, or disagree with prevalent community views.

However, distancing yourself from ‘small r’ rationality is far more radical and likely less considered. It’s similar to rejecting core EA ideas like scope sensitivity or cause prioritization just because one dislikes certain manifestations of the EA community (e.g., SBF, jargon, hero worship).

Effective altruism is fundamentally based on pursuing good deeds through evidence, reason, and clear thinking. In fact, when early effective altruists were looking for a name, one of the top contenders was ‘rational altruism’. Dissociating from the aspiration to think clearly would, in my view, remove something crucial.

Historically, the EA community inherited a lot of its epistemics from Rationality[1] – including discourse norms, an emphasis on updating on evidence, and a spectrum of thinkers who don’t hold either identity closely but can be associated with both EA and rationality.[2]

Here is the crux: if the zeitgeist pulls effective altruists away from Rationality, they should invest more in rationality, not less. Because cultivating reason is critical for effective altruism, someone will need to work on it. If EAs will no longer mostly be talking to people connected in some way to Rationality, someone else will need to pick up the baton.

  1. ^

    Clare Zabel expressed a similar worry in 2022:

    Right now, I think the EA community is growing much faster than the rationalist community, even though a lot of the people I think are most impactful report being really helped by some rationalist-sphere materials and projects. Also, it seems like there are a lot of projects aimed at sharing EA-related content with newer EAs, but much less in the way of support and encouragement for practicing the thinking tools I believe are useful for maximizing one’s impact (e.g. making good expected-value and back-of-the-envelope calculations, gaining facility for probabilistic reasoning and fast Bayesian updating, identifying and mitigating one’s personal tendencies towards motivated or biased reasoning). I’m worried about a glut of newer EAs adopting EA beliefs but not being able to effectively evaluate and critique them, nor to push the boundaries of EA thinking in truth-tracking directions.

  2. ^

    The EA community actually inherited more than just ideas about epistemics: compare, for example, Eliezer Yudkowsky’s 2007 essay on Scope Insensitivity with introductions to effective altruism current in 2024.