The ones that walk away
Alice: I’ve grown disillusioned with the EA community. I still want to dedicate my life to doing as much good as I can, but I am no longer certain that EA is the best way to do that.
Bob: I see where you’re coming from, but where specifically is that disillusionment leading you?
Alice: I am still confident that major EA causes are important areas to work on. I think EA organizations do good work in those areas, so I would be quite happy to work at some of them. On the other hand, I’m much less willing to defer to EA institutions than before, and I’m unlikely to attend EA events or personally associate with EAs. So I imagine mostly disengaging from EA, albeit with some professional interest in EA organizations.
Bob: You’re disentangling different aspects of EA as a community. We are linked first and foremost by our moral commitments, to having a larger moral circle and trying to do the most good for that moral circle. You still hold those commitments. On top of that, we’re also linked by the intellectual commitment to think rigorously and impartially about ways to do the most good. It sounds like you still believe in that, and the change is that you want to do more of that thinking for yourself and less of it through the EA collective consciousness. Is that right?
Alice: Pretty much.
Bob: But if you still hold the moral and intellectual commitments that define effective altruism, why do you want to disengage from EA?
Alice: For me, the social dimension creates a dangerous tribalism. I get upset when people criticize EA on Twitter and in my life, and I feel the need to defend it. My in-group bias is being activated to defend people and arguments that I would not otherwise defend.
Bob: Isn’t being cognizant of tribalism enough to help you avoid it?
Alice: That’s unlikely, at least for me. I’m not a brain in a vat; my emotions are important to me. They don’t dictate every action I take, but they have some sway. Furthermore, everyone thinks they are above tribalism, so we should be skeptical about our ability to succeed where everyone else fails.
Bob: Point taken, but this argument proves too much. This is not just an argument against identifying with EA—it’s an argument against identifying with any collective, since every collective makes you feel some tribalism.
Alice: And that’s exactly what I’m defending. I think it makes sense to work with collectives to accomplish shared goals—as I said, I would still work at EA organizations—but I am much less excited about identifying with them. That shared identity is not necessary for us to do good work together, and it creates a lot of scope for abuse.
Bob: That feels uncomfortably transactional. Can you really work with someone towards a shared goal that is meaningful to you without feeling some bond with them? Don’t you feel kinship with people who care about animal suffering, for example?
Alice: Well… I see what you mean, so I’ll step back from the strong claim. But the EA community is far more tightly knit than that basic moral kinship. We have group houses, co-working spaces, student groups, conferences with afterparties, a CEA community health team, the Forum, Dank EA Memes, EA Twitter… this is not your average community, and the typical EA could probably step back quite a lot while retaining that kinship and the sense of working together to make the world better.
Bob: It’s true that this is a highly-engaged community, but most of those aren’t just for fun; they have some role in our ability to do good. To pick on two examples you listed, I’ve met people at conferences who I learnt a lot from, and the Forum is one of the best websites on the internet if you filter it aggressively. I wouldn’t take this reasoning too literally, but I still suspect that if you disengaged from the Forum and stopped meeting EAs at conferences, it would reduce your impact.
Alice: I’m uncomfortable with that kind of reasoning. “If you engage less with the specific EA movement, you will renege on your moral commitments” sounds like “if you engage less with our specific religious institution, you will renege on your commitment to God”. It could be true, but it’s also a self-serving belief for communities to hold.
Bob: That’s undeniable, and I would never take this argument into the realm of moral blackmail. At the same time, it’s a real possibility. Do you have a specific reason to believe it’s wrong?
Alice: Nope, and I’m comfortable with that.
Bob: I understand your concern, but EA started as a bunch of individuals reading a book, donating some money and going on with life. There are sensible reasons why we quickly grew out of that; it’s just not the way to do the most good. We need to act collectively, to form organizations that scale, and we need to exchange ideas to learn from each other, so I think even the best possible version of the EA community is not far from what we have now.
Alice: Of course EA had to grow, but the question comes back to whether this community is actually realizing the promise of that growth. I don’t want to lean too heavily on recent events, but I do think they show that we have less of a handle on doing good than we thought.
Bob: I think that’s only true because people previously had unusually naive beliefs about EA infallibility. Now that EA infallibility has been smashed to pieces, we are all moving to a new relationship with EA. I’m optimistic that this new relationship will be much healthier than before, and I think you underestimate our ability to push it in that direction. I can personally think of a few Forum posts and conversations with people that have changed my mind about important topics, and with a community as small as ours, it doesn’t take a lot to have an effect. Yes, we can have groupthink and bad equilibria, but we aren’t naive about those things, and we are constantly trying to do better. Given that, I personally can’t be too pessimistic about the future of EA.
Alice: I like that vision, and I have zero interest in dismissing it. But as someone whose unusually naive beliefs about EA were smashed, I don’t think it’s as simple as hitting reload on a save file. I personally need to re-interrogate ways to do good with my life, and whether non-EA frameworks rise to that challenge better than EA has. Maybe the end result of that exploration will be a renewed confidence in EA, and maybe it won’t. Regardless, I need a clean slate, so I am disengaging from the EA community. And maybe that’s also good for my long-term impact because [post-hoc rationalization].
Bob: Well then, godspeed. I hope you find what you’re looking for.
I really liked this post, and I felt like I understood both characters’ views super well. I feel like this type of writing often makes one character a strawman, but I didn’t feel that at all in this case. Great job!
+1, I really liked this post, and found it very kind / empathetic on both sides! I think this is maybe now my favorite post in this genre.
Other posts I like which feel related:
EA disillusionment
“It’s ok to leave EA” (different vibe)
EA-break, EA-slow
Leaving a line of retreat (slightly different audience)
Thanks for writing this :)
Effective altruism in the garden of ends doesn’t look too related but it’s the real progenitor of this post, because it catalyzed my thinking about doing good in a more expansive way than we normally think of, so I wanted to shout it out too :)
I’ve had similar feelings to Alice. Part of it is that group membership serves a role of signalling information about yourself to others. It’s very different to describe yourself to others as an EA when the primary association with it is “slightly weird but well-meaning group of charitable people” vs when it’s “those weird crypto/eugenics people”. And in the latter case you are better off labelling yourself as something else.
That seems bad in equilibrium. For example, if the public view of EA after the WWOTF publicity tour is “those people who think about the long term future”, then a global health/animal welfare person in EA would be “better off” by not calling themselves EA and labelling themselves as something else. But that would make it much harder to have a big tent within EA.
Why is big tent EA an end in itself? The EA movement exists for the purpose of doing good, not for having a movement. If multiple smaller movements are more effective at doing good then we should do that.
Multiple groups make it easier to specialise and avoid having single points of failure. Though you lose some economies of scale and coordination benefits.
IMO big tent is valuable due to gains from trade. Suppose I work on cause area X, but Charlie is better suited to work on X than I am, and I am better suited to work on their cause area Y. Our labour swap does more for both cause areas than each of us focusing on our own.
Gains from trade, and agglomeration effects, and economies of scale. Being effective is useful for doing good, having a lot of close friends and allies is useful for being effective.
It’s not necessarily an end in and of itself, but a scenario like this can lead to fairly arbitrary factors deciding what EA “is” in a self-reinforcing cycle. Let’s say a popular news outlet wrote a critical article about EA, and then many EAs decided to stop identifying with it because of the now-negative connotations. It seems wrong to let external forces dictate what EA is in that way.
I think a meta-question that emerges here is: do I have Alice-like feelings because I feel that EA will weaken (or “detract from my issues”) as a rallying call, or because I’m concerned with how I’m perceived (both by others and by myself)? That’s why I really appreciated how you framed Alice’s POV as personal and emotional, KT!
If you stop calling yourself an EA in public because you think doing so will give people the wrong impression, that’s one thing, I guess.
Thanks for writing that.
There is—or needs to be—a place Alice can go to do good most effectively in a way that works for her, and we need to both honor that and help her in the next stage of her altruistic journey to the extent we can. That is a rather nonspecific statement, for sure, because Alice is a loosely defined fictional character.
(This is not commentary on events that led Alice to where she is today, or on the possibility of reforms that might lead her to come back. It is a statement about what we owe Alice right now, and what we owe a world which will benefit from having an EA alum in a different altruistic community.)
My opinion is that Alice’s journey is a personal one, or at least one whose path is practically defined by being independent of EA. But the same caveat applies about her being a loosely defined fictional character.
Because Alice told Bob that she still aims to do as much good as she can, I thought she might have answered this question differently:
I imagined that someone in Alice’s position might instead have said something like:
I thought this point might be important because it suggests how high the bar is for a community that aspires to do good by attracting and retaining impact-motivated people: Even if the community creates the right kinds of spaces and has the right aims, people like Alice are likely to disengage unless, in practice, staying involved seems to be the best possible use of their limited time and other resources.
(I really appreciated this post. Thank you!)
I wanted Alice’s response to capture a resistance to moral blackmail that I think is important to have. I personally could not make a tight philosophical defense in my head for why leaving EA is actually impact maximizing, but I am also more committed to resisting moral blackmail than I am to giving tight philosophical defenses of my beliefs. But I think your point makes total sense.
Do you see Bob’s question as moral blackmail? I feel like it’s a reasonable question to ask, but maybe I’m missing something.
Maybe moral blackmail is too harsh of a term. But I think that it is important to feel like you can leave without violating your moral commitments. That doesn’t in any way mean that Bob is wrong or trying to commit moral blackmail.
This is a fun writing style.
I feel a lot like Alice but have decided to stay and try to make change. EA is what we make it. And the more of us disillusioned/disappointed people stay and try to enact change, the more likely it is that we can tilt EA in the direction we would like to see. On the other hand, if we leave, it is quite likely EA will slip quickly into exactly what we think it shouldn’t be—something that will be exacerbated by a vicious cycle of us leaving and then EA struggling to attract a more diverse crowd (this vicious cycle becomes a virtuous one if we stay!).
That said, I realize many people just don’t have the capacity to stay and enact change. I absolutely do not think you should stay and fight if you risk burn-out or other mental ill-health. But I hope enough of us have some energy left to stay and make EA closer to what we think it should be.
My comment was inspired by this post encouraging people to fight for their own version of doing what’s right.
Totally agree. Some EA-skeptical friends asked me if I was EA or EA-adjacent and I said “if everyone with my views described themselves as EA-adjacent (which some of them definitely do) then EA would be much worse for it”.
That said, the leaver’s story is not about making EA better, but about making the world better. And it’s good to open your mind to the idea that these don’t coincide. We would love for them to coincide, but that’s exactly why we should be receptive to thinking that they don’t.
I really feel like this captures my own thinking pretty well at the moment. Though I am worried that leaving also has other effects on the community, such as Evaporative Cooling.
Great post! If this story is about your journey, I wish you the best of luck and hope you find what you’re looking for. I definitely see advantages in some people spending time within the community and then leaving to explore outside of it. I can also see how keeping one foot within the community could stop someone from fully diving into something else, or from seeing how the world looks from outside the EA framework.
Somewhat relevant (takes the hard proves-too-much stance): https://www.econlib.org/archives/2014/10/dear_identity_p.html
I’ve been Alice. I had some experiences within EA that led me to take a year-long EA-leave. When I left I did not know for how long, or if I would come back. This was definitely the right thing for me to do. If you’re Alice and you feel you need to take a step back, then you are probably correct. Even if you can’t exactly articulate why, you are probably correct. If the EA network is net positive for you and your work, then you will be back.
I liked this and felt all points were reasonable, except for this one:
“I think even the best possible version of the EA community is not far from what we have now.”
I strongly disagree if this is meant literally, ie. including possible future versions. I imagine if EA’s development continues for multiple decades, which I believe it can, then that future EA would look far more impressive than current EA.
Alternatively, if it only means “given when and where it started, we’re in one of the best timelines with respect to the EA community so far”, that might be up for debate.
But given the context, it seems only the first makes sense? Actually on re-reading the passage, I’m not sure how this statement connects to the rest of the paragraph.
This is great, I really appreciate you writing it. I just took vacation for a couple months and basically did what Alice said. Any readers feel free to DM me if you’d like to discuss these feelings + what you might do about it :))
What a refreshing read, easy to digest but with deep thoughts! Thank you for this. I find the “faith vs church” argument quite compelling and I’ve used it in the past to explain similar conundrums (although I am not a religious person myself).
☝️ That is the crux of what Alice needs to explore. Is Alice upset by criticism of EA (the principles), or by criticism of someone’s actions who is—in some way—associated with EA? The two are very different.
I think the argument here is that these two cannot be / are not distinguished, which bothers her. When a community is so strongly intertwined with its values that they are one and the same to the “general public” (i.e. outsiders), criticism of individuals in that community tends to be directed toward the community as a whole as well as the underlying values. I can relate to this being frustrating.
Conflating the actions of a person with the values of a group is a fallacy. It contains elements of Cherry Picking, Fallacy of Composition, and Ad Hominem.
I wasn’t saying that it’s correct to do so. My point is precisely that the public tends to not distinguish, and I think it’s perfectly legitimate to be bothered by this and feel especially frustrated because your views are being invalidated by the wider public due to an incorrect assumption.
I think you and I have different interpretations of the word “legitimate.” Oxford says “conforming to the law or to rules; … able to be defended with logic or justification.” I guess it is technically true if you are allowing for the possibility of fallacious logic.
Yes, people make faulty logical conclusions all the time. I’m not saying those people are bad or internally inconsistent. But if such a stance drives someone away from a cause area in which they could have had a tangible positive impact, that is a suboptimal outcome.
Dave, now you are cherry picking. Oxford also states “for which there is a fair and acceptable reason. SYNONYMS valid, justifiable.”
And there are “fair and acceptable reasons” to be bothered by people who don’t care about the impact that predictable, “faulty logical” conclusions of the public have on their behavior, resulting in a “suboptimal outcome”.
It’s a fair assumption that a great proportion of the general public does not distinguish between people and cause. If you truly want the cause to succeed—or have an optimal outcome—you should care about mitigating that risk and not blame suboptimal outcomes on the logical fallacies of others.
Please don’t “should” on me—or anyone else for that matter—Julian. It’s disrespectful.
I feel that educating people about logical fallacies is the way forward. But I am curious to hear what you or others propose that doesn’t simply enable people to perpetuate existing behaviors.
I have written this post on EA governance and democratization. I hope it is of interest:
https://forum.effectivealtruism.org/posts/KjahfX4vCbWWvgnf7/effective-altruism-governance-still-a-non-issue