Effective altruism is no longer the right name for the movement

TL;DR

  • As some have already argued, the EA movement would be more effective at convincing people to take existential risks seriously by focusing on how these risks could kill them and everyone they know, rather than on how they need to care about future people

  • Trying to prevent humanity from going extinct does not match people’s commonsense definition of altruism

  • This mismatch causes EA to filter out two groups of people: 1) People who are motivated to prevent existential risks for reasons other than caring about future people; 2) Altruistically motivated people who want to help those less fortunate, but are repelled by EA’s focus on longtermism

  • We need an existential risk prevention movement that people can join without having to rethink their moral ideas to include future people, and we need an effective altruism movement that people can join without being told that the most altruistic endeavor is to try to minimize existential risks

Addressing existential risk is not an altruistic endeavor

In what is currently the fourth highest-voted EA forum post of all time, Scott Alexander proposes that EA could talk about existential risk without first bringing up the philosophical ideas of longtermism.

If you’re under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one.

But right now, a lot of EA discussion about this goes through an argument that starts with “did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?”

Regardless of whether these statements are true, or whether you could eventually convince someone of them, they’re not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.

The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like “the hinge of history”, “the most important century” and “the precipice” all point to the idea that existential risk is concentrated in the relatively near future—probably before 2100.

The average biosecurity project being funded by Long-Term Future Fund or FTX Future Fund is aimed at preventing pandemics in the next 10 or 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they’re good is that if there’s a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.

I agree with Scott here. Based on the reaction on the forum, a lot of others do as well. So, let’s read that last sentence again: “if there’s a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know”. Notice that this is not an altruistic concern – it is a concern about survival and well-being.

I mean, sure, you could make the case that not wanting the world to end is altruistic because you care about the billions of people currently living and the potential trillions of people who could exist in the future. But chances are, if you’re worried about the world ending, what’s actually driving you is a basic human desire for you and your loved ones to live and flourish.

I share the longtermists’ concerns about bio-risk, unaligned artificial intelligence, nuclear war, and other potential existential risks. I believe these are important cause areas. But if I’m being honest, I don’t worry about these risks because they could prevent trillions of unborn humans from existing. I don’t even really worry about them because they might kill millions or billions of people today. The main reason I worry about them is because I don’t want myself or the people I care about to be harmed.

Robert Wright asks, “Concern for generations unborn is laudable and right, but is it really a pre-requisite for saving the world?” For the vast majority of people, the answer is obviously no. But for a movement called effective altruism, the answer is a resounding yes. Because an altruistic movement can’t be about its members wanting themselves and their loved ones to live and flourish – it has to be, at least primarily, about others.

Eli Lifland responds to Scott and others who question the use of the longtermist framework to talk about existential risk, arguing that, by his rough analyses, “without taking into account future people, x-risk interventions are approximately as cost-effective as (a) near-term interventions, such as global health and animal welfare and (b) global catastrophic risk (GCR) interventions, such as reducing risk of nuclear war”. Eli may be right about this, but his whole post is predicated on the idea that preventing existential risks should be approached as an altruistic endeavor – which makes sense, because it’s a post on the Effective Altruism Forum, to be read by people interested in altruism.

But preventing existential risk isn’t only the purview of altruists. However narrow or wide someone’s circle of concern, existential risk falls within it. Whether a person is totally selfish, cares only for their family, only for their community, only for their country, or is as selfless as Peter Singer, existential risks matter to them. EAs might need to expand the circle of concern to future people to justify prioritizing these existential risks, but the rest of the world does not.

(One might argue that this is untrue, that most people outside the EA movement clearly don’t seem to care or think much about existential risks. But that is a failure of reasoning about the risks themselves, not a failure of moral concern. There are many people who don’t align themselves with effective altruism yet place a high priority on addressing existential risks, including Elon Musk, Dominic Cummings, and Zvi Mowshowitz.)

The EA movement shoots itself in the foot by starting its pitches to reduce existential risks with philosophical arguments about why people should care about future people. We need more people taking these cause areas seriously and working on them, and the longtermist pitch is far from the most persuasive one. Bucketing the issue as an altruistic one also prevents what’s really needed to attract lots of talent: high levels of pay, without any handwringing about how that’s not what altruism should look like. Given the importance of these issues, we can’t afford to filter out everyone who rejects longtermist arguments, can’t be motivated by altruistic considerations for future people, or would only work on these problems as part of high-earning, high-status careers (aka the overwhelming majority of people).

The EA movement then proceeds to shoot itself in the other foot by keeping all the great and revolutionary ideas of the bed net era in the same movement as longtermism. When many people who are interested in altruism see EA’s focus on longtermism, they simply don’t recognize it as an altruism-focused movement. This isn’t just because these ideas are new. Some of the critics who once thought that caring about a charity’s effectiveness somehow made the altruism defective eventually came around, because ultimately the bed net era really was about how to do altruism better. The idea of using research and mathematical tools to identify the most effective charities and giving only to them is controversial, but it still falls under people’s intuitive definition of what altruism is. This is not the case with things like trying to solve the AI alignment problem or lobbying Congress to prioritize pandemic prevention, which fall well outside that definition.

The consequence of this is to filter out potential EAs who find longtermist ideas too bizarre and so stay out of the movement. I have met various effective altruists who care about fighting global poverty, and maybe about improving animal welfare, but who are not sold on longtermism (and are sometimes hostile to portions of it, usually the concerns about AI). In their cases, their appreciation for what they consider the good parts of EA outweighed their skepticism of longtermism, and they became part of the movement. It would be very surprising if there weren’t others in a similar boat who, being somewhat more averse to longtermism and somewhat less appreciative of the rest of EA, find the balance swinging the other way and avoid the movement altogether.

Again, EA has always challenged people’s conceptions of how to do altruism, but the pushback against bed-net-era EA was about the idea that altruism could be made effective and about EA’s claim that the question of where to give had a right answer. Working or donating to help poor people in the developing world already matched people’s conception of altruism very well. The challenge longtermism poses is to the concept of altruism itself and to how it’s being extended to solve very different problems.

The “effective altruism” name

“As E.A. expanded, it required an umbrella nonprofit with paid staff. They brainstormed names with variants of the words ‘good’ and ‘maximization,’ and settled on the Centre for Effective Altruism.” – The New Yorker in their profile of Will MacAskill

So why did EA shoot itself in both feet like this? It’s not because anyone decided that it would be a great idea to have the same movement both fighting global poverty and preventing existential risk. It happened organically, as a result of how the EA movement evolved.

Will MacAskill lays out how the term “effective altruism” came about in this post. When it was first introduced in 2011, the soon-to-be-named EA community was still largely the Giving What We Can community. With the founding of 80,000 Hours, the movement began taking its first steps “away from just charity and onto ethical life-optimisation more generally”. But this was still very much the bed net era, when the focus was on helping people in extreme poverty. Longtermist ideas had not yet taken hold in the movement.

Effective altruism was, I believe, a very good name for the movement as it was then. The movement has undergone huge changes since then, and to reflect these changes it decided to… drop the long form and go by just the acronym? Here’s Matt Yglesias:

The Effective Altruism movement was born out of an effort to persuade people to be more charitable and to think more critically about the cost-effectiveness of different giving opportunities. Effective Altruism is a good name for those ideas. But movements are constellations of people and institutions, not abstract ideas — and over time, the people and institutions operating under the banner of Effective Altruism started getting involved in other things.

My sense is that the relevant people have generally come around to the view that this is confusing and use the acronym “EA,” like how AT&T no longer stands for “American Telephone and Telegraph.”

If Matt is right here, it means that many EAs already agree that effective altruism is a bad name for what the movement is today. But if they’re trying to remedy that by simply switching to the acronym EA, they’re not doing a very good job. (It’s also worth noting that the acronym EA means something very different to most people.)

So, what is the remedy? Well, this is an EA criticism post, so actually fixing the problem is someone else’s job, but I can think of two ways to address it.

The first is that the effective altruism movement gets a new name that properly reflects the full scope of what it tries to do today. This doesn’t mean ditching the Effective Altruism name entirely. Think about what Google did in 2015 when it realized that its founding product’s brand name was no longer the right name for the company, given all the non-Google projects and products it was working on: it restructured itself to form a parent company called Alphabet Inc. It didn’t ditch the Google brand name; in fact, Google is still its own company, but it’s a subsidiary of Alphabet, which also owns other companies like DeepMind and Waymo. EA could do something similar.

The second option is that extinction risk prevention be its own movement outside of effective altruism. Think about how the rationality and EA movements coexist: the two have a lot of overlap but they are not the same thing and there are plenty of people who are in one but not the other. There’s also no good umbrella term that encompasses both. Why not have a third extinction risk prevention movement, one that has valuable connections to the other two, but includes people who aren’t interested in moral thought experiments or reading LessWrong, but who are motivated to save the world?

Once the two movements are distinguished, they can each shed the baggage that comes from their current conflation with each other. The effective altruism movement can be an altruism-focused movement that brings in people who want to help the less fortunate (humans or animals) to the best of their abilities, without anyone trying to convince them that protecting the far future is the most altruistic cause area. The extinction risk prevention movement can be a movement that tries to save the world, attracting top talent with money, status, and a desire to have a heroic and important job, rather than narrowing itself to people who can be persuaded to care deeply about future people.

Restructuring a movement is hard. It’s messy, it involves a lot of grunt work, it can confuse things in the short term, and the benefits might be unclear. But consider that the EA movement is only 15 or so years old, and that the longtermist view is based on the idea that humanity could still be in its infancy and that we should ensure it has a very long future ahead. Do we really want to trap the movement in a framework no one deliberately chose, just because changing it would be awkward and annoying?

Concluding with a personal perspective

I’m a co-founder and organizer of the Effective Altruism chapter at Microsoft. Every year, during Give Month, we try to promote the concept of effective altruism at the company. We’ve had various speakers over the years, but the rough pitch goes something like this: “We have high-quality evidence showing that some charities are orders of magnitude more effective than others. If you want to do the most good with your money, you should find organizations that reputable evaluators like GiveWell have measured to be high-impact. This can and will save lives, something Peter Singer’s drowning child thought experiment suggests is arguably a moral obligation.”

That’s where our pitches end for most people, because we usually only have an hour to get these new ideas across. If I were to pitch longtermism right afterwards, here’s my impression of how it would come across:

“Alright, so now that I’ve told you how we have a moral obligation to help those in extreme poverty and how EA is a movement that tries to find the best ways to do that, you should know that real effective altruists think global poverty is just a rounding error. We’ve done the math and realized that when you consider all future people, helping people in the third world (you know, the thing we just told you that you should do) is actually not a priority. What you really want to donate to is trying to prevent existential risks. We can’t really measure how impactful charities in this space are (you know, the other thing we just told you to do), but it’s so important that even if there’s only a small chance it could help, the expected value is higher.”

(I’m being uncharitable to EA and longtermism here, but only to demonstrate how I think I will end up sounding if I pitch longtermism right after our talks introducing people to EA ideas for the first time.)

I love introducing people to the ideas of Peter Singer and how by donating carefully, we can make a difference in the world and save lives. And I introduce these ideas as effective altruism, because that’s what they are. But I sometimes have a fear in the back of my mind that some of the attendees who are intrigued by these ideas are later going to look up effective altruism, get the impression that the movement’s focus is just about existential risks these days, and feel duped. Since EA pitches don’t usually start with longtermist ideas, it can feel like a bait and switch.

I would love to introduce people to the ideas of existential risk separately. The longtermist cause areas are of huge importance, and I think there needs to be a lot of money, activism, and work directed at solving these problems. Not because it’s altruistic, even if it is that too, but because we all care about our own well-being.

This means that smart societies and governments should take serious steps to mitigate the existential risks that threaten their citizens. Humanity’s fight against climate change has already shown that it is possible for scientists and activists to make tackling an existential risk a governmental priority, and one that bears fruit. That level of widespread seriousness and urgency is needed to tackle the other existential risks that threaten us as well. It can’t just be an altruistic cause area.