Leaning into EA Disillusionment

1. Intro

In this post I describe a phenomenon that I think is more common than we give it credit for: “EA disillusionment.” By this, I basically mean a process where someone comes into EA, engages heavily, but ends up feeling negatively toward the movement.

I know at least a handful of people who have experienced this (and I’m sure there are many more I don’t know)—people who I think are incredibly smart, thoughtful, caring, and hard-working, as well as being independent thinkers. In other words, exactly the kind of people EA needs. Typically, they throw themselves into EA, invest years of their life and tons of their energy into the movement, but gradually become disillusioned and then fade away without having the energy or motivation to articulate why.

I think this dynamic is bad for EA in multiple ways, some obvious, some less so. Obviously, people getting disillusioned and leaving is not fun (to put it mildly) for them, and obviously it’s bad for EA if promising people stop contributing to the movement. But I think the most important downside here is actually that it results in a major blind spot for EA: at present, the way people tend to become disillusioned means that they are (a) unusually likely to have the exact kinds of serious, thoughtful critiques that the EA community most wants to hear, but (b) unusually unlikely to offer them. The result is that EA stays blind to major problems that it could otherwise try to improve on.

Why would this be true?

  • (a) The kind of people I mean are unusually likely to have useful, major critiques to offer because they have spent years immersing themselves in the EA world, often changing careers for EA reasons, developing EA social circles, spending time on community building, and so on.

  • (b) But, they’re unusually unlikely to offer these critiques, because by the time they have developed them, they have already spent years pouring time and energy into EA spaces, and have usually ended up despairing of the state of the community’s epistemics, social dynamics, error correction processes, etc. This makes the prospect of pouring even more time into trying to articulate complicated or nuanced thoughts especially unappealing, relative to the alternative of getting some distance and figuring out what they want to be doing post-EA.

I believe a healthier EA movement would be one where more people are able to go through a gentler version of the “disillusionment pipeline” described below, so that they come out the other side with useful perspectives on EA that they are more willing, able, and encouraged to share.

This post aims to do 4 things:

  1. Acknowledge the existence of a significant group of people who engage heavily with EA but end up feeling negatively toward the movement (“disillusioned EAs”), who tend to fade away quietly rather than making their concerns known.

  2. Describe a 3-phase pipeline of EA disillusionment: infatuation, doubt, and distancing.

  3. Point out some of the (in my view, real and important) problems that drive people through these three stages.

  4. Make some suggestions for individuals at various stages of this pipeline, including the possibility that it’s valuable to lean into feelings of doubt/distance/disillusionment, rather than trying to avoid them.

This intro has covered point (1); the rest of the post covers (2)-(4).

A core idea of this post is that going through an extreme version of the first (“infatuation”) phase of the disillusionment pipeline can be very harmful. Infatuation causes people to reorient huge parts of their lives—careers, social circles, worldviews, motivational structures—around EA, making it somewhere between painful and impossible to contemplate the idea that EA might be wrong in important ways.

2. What EA disillusionment looks like

Everyone walks their own path, so I don’t claim the below is a faithful representation of what any one person has gone through. But roughly speaking, the pattern I’m pointing to looks something like the following…

Phase 1: Infatuation

Person discovers EA and is immediately taken by it. Often they’re starting from either feeling a total lack of meaning and purpose, or from feeling overwhelmed by the world’s problems and confused why no one else seems to care about how best to help. Maybe they have already started thinking through some EA-related ideas (e.g. cost effectiveness, opportunity cost), and are thrilled to find others who see the world the same way.

  • “Ahh there’s so much terrible stuff in the world what do I do why does no one seem freaked out about this” / “Why do people’s brains seem to turn off when thinking about charity, why do all the smart people around me balk at using basic analytic tools to think about how to do good”

  • “Wait wait there’s this whole EA movement that’s all about doing the most good? There are thousands of these people? There are books and podcasts and Oxford professors and billionaire donors with professional foundations? Thank goodness, I’ve found my people, I’m home safe”

  • “OK well looks like these people have things figured out, so my job is surely to slot in and find where this community thinks I can contribute most and put my head down and work hard and save the world” / “These people seem more right than any group I’ve ever interacted with, and they seem very sure about XYZ—so probably deferring to that has higher expected value than anything I can come up with on my own”

For people who feel this “click” when they first encounter EA, the feeling can be totally intoxicating. Discovering a community that shares some of your core priorities/values/intellectual assumptions, and that has spent many more person-hours figuring out their implications, naturally leads you to grant that community a lot of goodwill… even if you haven’t checked all of their conclusions, and some of them seem a little off. Often, this infatuation phase leads people to put in years of effort, including changing career tracks, leaving old social or intellectual circles, spending time on community building/preaching the gospel, etc.

Newcomers who are perceived as very promising can have an especially overwhelming version of this experience, with community members (even community leaders) telling them they’re super smart and high-potential, offering to pay high salaries and cover travel costs, rapidly bringing them into high-trust spaces, etc.

Phase 2: Doubt comes in

Person finds their footing in EA; after originally feeling swept off their feet, they start to notice that maybe things don’t add up quite the way they thought.

  • “Wait hang on, I’ve been putting my head down and following the community consensus… but I’m not actually sure the messages I’m getting make sense”

  • “Hm, when I poked on things I was unsure of, some people gave answers that didn’t make sense and some people told me they just think it because other (high status) people think it and some people told me they had no idea and didn’t think it was obviously right… so now I’m really confused”

  • “Wait now I’m like months/years in and I feel like most of the wisdom I’ve absorbed was actually built on shallow foundations and I’m not convinced there’s nearly as much consensus or solid thinking as it seems there is”

  • “There were a lot of points where I adopted the community consensus because I thought the people at the top had thought everything through incredibly rigorously, but now I’ve met some of those people and even though they’re smart, I don’t think they’re nearly as reliable as I had been assuming”

  • “…and also the world is HUGE and complicated and we’re only accounting for tiny pieces of how it works and also we’re all so young and naive”

  • “Aaaaaah”

Different people cope with this phase differently. Some move on relatively quickly to phase 3; others get stuck here, feeling conflicted between growing doubts and impulses to squash those doubts in the name of Doing The Most Good™. At least one friend has described feeling paralyzed/unable to think for themselves about some of the key issues in question, because at this point so much of their career, social circles, and motivational structures are built around believing EA dogma.

Phase 3: Distancing

Person gradually, painfully comes to terms with the fact that EA actually can’t offer them what they hoped. They grapple with the idea that not only have they wasted years of work on ideas they’re no longer sure are right, but that they now probably see many of their coworkers, friends, and even former heroes/mentors as misguided (or even actively causing harm).

  • “EA seems to be more of a social club of people agreeing with each other’s takes than a serious intellectual community offering something novel to the world. It pretends to be purely about finding truth and doing good, but of course in reality it has its own status, signaling, hierarchies, and bullshit. I give up on finding people who take thinking about these problems really seriously”

  • “Interacting with new EAs who think EA is the gospel is actively off-putting. I can’t stand the echo chambers or false confidence or people blindly agreeing with each other; so many people just seem to be repeating talking points rather than actually thinking anything through”

  • “It sure seems like a lot of effort goes into trying to persuade other people of EA talking points (e.g. “you should work on existential risk and that means either AI or bio”) rather than welcoming newcomers’ disagreements and different intuitions… that feels pretty icky, especially given how shaky some of this stuff seems”

  • “I no longer believe in what I’m doing, but neither do I have conviction that other things I can think of would be better, this is horrible what do I do”

In general, the deeper a person goes in phase 1, and the more of their life they reorient based on that, the more painful phases 2 and 3 will be. A common outcome of phase 3 right now seems to be for the person to leave EA, either explicitly or in a gradual fade-out. This may be the right call for them as an individual, but from EA’s perspective this is a bad outcome, since the community never gets to learn from or improve on what drove them away.

But it does seem possible for this to go differently. To use myself as an example, I would describe myself as “quasi-disillusioned”: when I talk with the “disillusioned” friends who inspired this post, I often agree with them on many substantive points, yet I find myself more inclined than they are to keep engaging with EA (though with more distance than in the past). I think part of what’s going on here might be that I went through the pipeline above relatively quickly, relatively early. That has left me coming out of stage 3 with some emotional and intellectual distance from the community, but still with interest and motivation to keep engaging.

3. Some factors driving EA disillusionment

It’s difficult to nail down exactly what causes people to go through the pipeline described above—in part because different people experience it differently, but mostly because a lot of the relevant factors are more about “vibes” or subtle social pressures than explicit claims EA makes.

Nonetheless, here is my attempt to gesture at some of the factors that I think cause people to become disillusioned, written in the form of takeaways for EA as a movement/​community:

  • In the grand scheme of things, EA “knows” so, so little about how to do good. Not nothing! But still, relative to the scale and complexity of the world, extremely little. EA could do a much better job of emphasizing that everything we’re doing is a work in progress, and of encouraging people coming into the community to hold onto their skepticism and uncertainties, rather than feeling implicit pressure to accept EA conclusions (aka EA dogma) wholesale.

  • There are huge differences in how well-grounded different EA claims are; we should be much more mindful of these differences. “Donations to relieve poverty go much further in the developing world than in the developed world” or “If you care about animal welfare, it probably makes more sense to focus on farmed animals than pets because there are so many more of them” are examples of extremely well-grounded claims. “There’s a >5% chance humanity goes extinct this century” or “AI and bio are the biggest existential risks” are claims with very different epistemic status, and should not be treated as similarly solid. “You, personally, should [fill in the blank]” is different again, and is (in my view) a category of claim we should be especially wary of making strongly.[1]

  • EA sells itself as having very few intellectual assumptions or prerequisites,[2] but in practice the ideas associated with the community are built on a very specific set of paradigms and assumptions. We should be both more aware of and more cautious about letting EA-approved frames solidify. Plenty of these frames are useful and valid. But ideas like “your career options are earning to give, community building, or direct work,” or “the two types of work on AI are technical safety and governance,” or “the most important kind of AI safety work is alignment research,” which can come to feel simple and self-evident, actually conceal a ton of conceptual baggage about how to slice things up. If we accept frames like these blindly, rather than questioning what all those words mean, how they were chosen, and what they subtly miss, we will do less good.

  • We should be cautious of encouraging people to fall too hard in the “infatuation” phase. Instead of just giving them positive reinforcement (which I think is currently the norm—“It’s so cool you’re getting these ideas so quickly! You’re such a natural EA!”), established community members should see it as part of their role to help people to keep their feet, take their time with major life changes, and realize that they may later feel less infatuated with EA than they do at present. My aspiration for this post is that people could send it to infatuated newcomers to partially serve this function.

An aside on EA being wrong about a bunch of stuff

A common thread in the above is that EA is sure to be making subtle mistakes about a lot of things, and huge mistakes about a few things.

On the one hand, I doubt many EAs would dispute this claim outright—EA does famously love criticism, and in practice I think the community is good at giving people positive feedback for writing up long, detailed, step-by-step arguments about things EA might be getting wrong.

But on the other hand, in many settings the community as a whole seems to give off strong implicit vibes of “we have things figured out; let us tell you how you should think about this; here are the approved talking points.” This especially seems to be directed toward people who are encountering EA for the first time.

A tension that has to be navigated here is that often people’s first objections are not novel, so strong counterarguments already exist. If someone comes to their first EA event and asks why EA doesn’t support giving money to the homeless, or why you would be worried about existential risk from AI when it’s obvious that AI isn’t conscious, then it makes sense to walk them through what they’re missing.

The problem is distinguishing situations like that from situations where their take is a bit off-kilter from the standard EA consensus, or where they’re coming at the topic from a new angle that doesn’t quite fit—which could be a sign of simply not yet understanding an argument, but could also be a sign of having something new to contribute. In cases like that, what often seems to happen is that the newcomer experiences some combination of explicit arguments and implicit social pressure to ignore the subtle discordant notes in their thinking, and instead just accept the ready-made conclusions EA has to offer. This sands the edges off their views, making them less able to think independently and less inclined to express when they do.

(I don’t think this is the place to get into details of specific examples, but in a footnote I’ll list some examples of places in my field—AI policy/governance—where I think EA thinking is much shakier than people treat it as.)[3]

So, one of my aims with this post is to nudge things slightly in a direction that gives people more implicit encouragement to foster doubts, welcome half-formed notes of unease, play around with their own ideas, and take seriously their own ability to notice gaps in mainstream EA thinking. Without this, our loud protestations that we love well-thought-through, cleanly-written-up criticism will be pretty useless.

4. Suggestions for individuals

Again, everyone experiences all this differently, so I don’t pretend to have a complete or perfect list of suggestions. But some things that might be good include the following.

Anticipate and lean into feelings of doubt, distance, and disillusionment

  • Know that other people have gone through the disillusionment pipeline, including (especially!) very smart, dedicated, caring, independent-minded people who felt strong affinity for EA. Including people who you may have seen give talks at EA Global or who have held prestigious jobs at EA orgs. Consider that you, too, may end up feeling some of the things described above.

  • If you notice yourself feeling doubts, lean into them! Sure, sometimes there will be a nicely argued counterargument that you’ll find persuasive, but often there won’t be, especially if it’s in an area you know particularly well. Remember that the goal is to actually make the world a better place, not to agree with the EA community; EA ≠ inherently impactful. The possibility that thinking things through for yourself might lead you to a different conclusion is something you should search out, not be scared of.

  • If you’re going through something like phase 2 or 3 and finding it emotionally difficult, know that you’re not alone. If you feel like you’re losing something important to your life, give yourself grace to grieve that. Remember that you don’t have to accept or reject EA wholesale. You can stay in touch with people whose ideas or company you value; you can work on an EA cause in a non-EA way, or a non-EA cause in an EA way; you can float around the edges of the community for a while (aka take an EA-break or an EA-slow), or leave entirely if you want.

Maintain and/or build ties outside EA

  • Keep your head above water in the non-EA world. Rather than getting swept away in the EA tide, make sure you’re aware of and taking seriously the expertise, wisdom, intuitions, etc. of people from other communities who do work related to yours; they almost certainly know things and can do things that EA doesn’t and can’t. (This can still be true even if you find EA ideas, conversations, etc. more valuable on the whole.) In my work, I notice a clear difference between people who seem to only read and talk to EA sources and those who are familiar with a broader set of issues and perspectives—I much prefer working with the latter group.

  • Relatedly, be wary of cutting ties to non-EA friends, social spaces, hobbies, or other sources of meaning. If you don’t have many such ties (e.g. because you’ve just moved to a new place), actively foster new ones. This isn’t just so that you have something to catch you if you end up wanting more distance from EA—it’s also an important psychological buffer to have in place to ensure that you feel able to think clearly about EA ideas, social pressures, and so on in the first place. If your life is set up such that your brain subconsciously thinks that criticizing EA could alienate you from your entire support structure, you’re likely to have a much harder time thinking independently.

Defer cautiously, not wholesale

  • Don’t dedicate your whole career to doing what someone else thinks you should do, if you don’t really get—in your head and your gut—why it’s a good idea. Of course some amount of deferring to others is necessary to navigate life, but make sure you have explicit tags in your world model for “on this point, I’m deferring to X because Y, even though I don’t really get it”; and seriously, don’t let that tag apply to the big picture of what you’re doing with your life and work.

  • I know too many people who have gotten multiple years into a career track only to realize that they got there without really grokking what they were doing. The result is usually both that they find it very hard to stay motivated, and that they struggle to navigate the countless day-to-day decisions about what to prioritize and what to aim for, because they don’t have a deeply rooted internal sense of what they’re doing and why.

Assume EA is making mistakes, and help find them

  • Figuring out gaps, mistakes, subtle wrongnesses, etc. is so helpful! That’s what we most desperately need as a community! And it will often begin with a feeling of something being subtly off, or of an argument from someone whose judgment you respect seeming incomplete. Notice those moments, treasure them, and promote them to conscious attention. If something feels wrong but you can’t immediately articulate a clear argument for why, don’t dismiss it—keep noodling away and talking with friends, and see if over time you can get more clarity on it.[4]

  • Especially if you are coming to EA after having spent much time and intellectual energy elsewhere, there will likely be things you have thought more deeply about than anyone else in the community. If you get deep into some EA issue, your different experiences from the small number of people who have worked on that issue so far very likely mean you will see things that others have missed. EA needs your help pointing out the things you think are wrong or subtly off the mark!

5. Conclusion

As one friend who previewed this post put it:

“EAs who go too deep in Phase 1 will find it more professionally/personally/psychologically costly to accept Phase 2. I think this is bad for a couple reasons: (i) It creates a community of people who have selective blindspots/biases, i.e., refusing to ask themselves questions that will prompt Phase 2; and (ii) it means that entering Phase 2 can be completely destabilizing, leading them quickly to a ‘fuck EA’ view.

The ‘EA is the one true path’ and ‘fuck EA’ camps are both epistemically suspect. I think we need a higher percentage of people in an ‘EAs have a lot of great ideas, but the community is also misguided in a bunch of ways’ camp.”

I agree with this; one aim of this post is to argue that we should much more actively welcome and foster the existence of the “EAs have a lot of great ideas, but the community is also misguided in a bunch of ways” camp.

Ultimately, I hope that dealing better with the dynamics described in this post could result in an EA community that is more self-aware, less dogmatic, less totalizing, and thereby more able to persist and have a large positive impact into the future.

Thanks to Rebecca Kagan, Michael Page, George Rosenfeld, and a couple of others (who preferred not to be named) for comments on earlier drafts of this post.

  1. This post is a good example of people apparently internalizing a message of “you, personally, should [fill in the blank]”—without necessarily being told it explicitly. (See also that post’s author noting reluctance to include in the original post that she thinks AI might be less important than the community consensus thinks it is—talk about pressure not to question EA dogma, yikes.)

  2. I have been complicit in describing EA this way in the past; I now see that post as articulating an aspiration, not the reality of how EA works.

  3. When I interact with EAs who are interested in AI policy/governance, they often seem very bought into various ideas and frames that seem pretty speculative to me. These include: focusing on solutions based on “international cooperation” (whatever that means); assuming a distinction between AI “capabilities” and AI “safety”; a fixation on “AGI timelines,” rather than a more flexible conception of what kinds of things we might see over what kinds of time periods; and an inclination to separate out and disregard “near-term” AI risk concerns, such as those relating to fairness, social justice, and power dynamics. To be clear—many of these ideas are useful starting points, and some of the best critiques or expansions of these ideas have come from people associated with EA. But nonetheless, there seems to be an effect where ideas like this get codified and then passed on to newcomers as if they were the Correct Takes or The Way Things Are, rather than as useful preliminary thinking tools.

  4. (For instance, this post is the result of a couple years’ noodling and chatting about some stuff that felt off to me, which I couldn’t put into words very well initially, and still can’t put into words perfectly.)