Making Effective Altruism Enormous

When planning a project, a key question is what success looks like.

What does Effective Altruism look like if it is successful?

I think a lot of the answer is that, as a social movement, it's successful if its ideas are adopted and its goals are pursued not just by proponents, but by the world. Which leads me to a simple conclusion: Effective Altruism is at least three orders of magnitude too small, and probably more than that. And I think the movement has been hampered by thinking at a scale far, far too small to maximize its impact, which is, after all, the goal.

I’ll talk about three specific aspects where I think EA can and should scale.

  1. Dollars donated

  2. Cause areas being worked on

  3. People who are involved

I want to lay out what it would look like for this to change and, along the way, suggest how all three of these are connected, because I don't think any of them will grow nearly enough without the other two.

1. Funding

Funding today

At present, EA donors have an uncommitted pool of tens of billions of dollars, though it can't all be liquidated tomorrow. But that isn't enough to fully fund everything we know is very valuable. GiveWell has recently raised its funding bar to 8x the cost-effectiveness of GiveDirectly's cash transfers, and still has a $200m shortfall this year. We fully expect that there will be more money in the future, but no one seems to be claiming that the amounts available would be enough to, say, raise the standard of living in sub-Saharan Africa by even a factor of two, from roughly one thirtieth to one fifteenth of the level in the United States. That would take far more money.

The funding we should aim for

The obvious goal should be to get to the point where we're funding everything more effective than direct cash transfers, and then funding cash transfers as well. We're talking about a minimum of tens of billions of dollars per year, plausibly far more. This cannot possibly be sustained by finding more billionaires, unless wealth inequality rises even faster than interest in EA. We need more people, and clearly, if the money is supposed to go to improving the world, not everyone can be hired by EA orgs.

Instead, I claim we need to go back and enable the original vision of many EA founders, who were simply looking to maximize their charitable impact. That is, we need people to earn to give in the normal sense: having a job they like, living a normal life, and donating perhaps 10% of their income to effective charity. That is a sustainable vision for a movement, one that can be embraced by hundreds of millions or billions of people. And as an aside, if the ideas are widely embraced, they are also far more likely to be adopted by politicians allocating international aid, creating even more value democratically[1].

Scale likely requires diversifying cause areas

Alongside this, if and as effective altruism gets larger, the set of things effective altruists focus on will need to expand. If the EA donor base grows enough, we will fill the current funding gaps of EA organizations, and a broad base of supporters will include a small segment who work more directly on these issues. But scaling the current interventions gets us only so far; there will be a need for more causes. We will hopefully fill the funding gaps for scaling newer ideas quickly, and then need to expand again. Once we can save every life that can be saved for $10,000, we will need to move on to more and more expensive interventions: interventions that address quality of life and welfare, preventative healthcare for the uninsured in wealthy countries, and so on. If we successfully scale, the world will be a better and very different place.

2. Cause Areas

As mentioned, there are a few reasons to expand cause areas over time. But before doing so, there is a conceptual elephant in the room: Effective Altruism embraces cause-neutrality. Historically, cause neutrality has meant that we should find the single biggest and most important cause area and focus on that. It's good advice for individuals or small groups, and it means we shift quickly from one thing to the next: global poverty, animal welfare, existential risk. I claim there are important reasons to temper the focus on the single highest-leverage area, especially as each of the areas grows.

Decreasing Marginal Returns

First, as resources grow, we expect to find decreasing marginal returns in each area, so eventually we will want to fall back to prioritizing other causes. As funding increases, the "cheapest" opportunities to identify effective interventions disappear. And as areas move from needing generalist investigation to needing implementation, the need shifts from dedicated people to funding that pays direct workers to get things done. Similarly, as the number of people involved in, say, effective global biorisk reduction increases, the bar for entry rises: the work requires much more specific skill sets, and most people are unable to contribute directly. Over the past decade, it seems that effective altruists who once focused on global health switched next to animal welfare, then perhaps biosecurity or AI safety. And that makes sense, since those areas changed from needing intensive investigation to needing organizations and funding[2]. Not only does this suggest a high probability that other areas will continue to be identified and undergo this transition, it also implies that a broader pool of people looking at diverse areas makes identifying those opportunities easier.

Different Talents

Second, heterogeneous talents have different optimal avenues for direct work. The naive version of Effective Altruism (one that few people well-versed in the movement, or simply sensitive to reality, would agree with) would tell petroleum engineers to try to switch to AI safety research, or perhaps to alternative meat engineering. But those skills are not at all related. So even if such an engineer were willing to retrain and find work in those areas, which seems unlikely, they would be risking their career as a novice for a small chance of being impactful in a new domain. Instead, I suspect we could point to the need for petroleum engineers in geothermal energy production, with clearly positive impact, and note that they can earn to give if they hope to support even more impactful areas. In fact, EA organizations already have diverse needs, from public relations to operations to finance to general middle management. Moving people from these roles into "higher priority" areas isn't going to help.

Different Values

Third, values differ, and a broad-based movement benefits from encouraging diversity and disagreement[3]. For example, there are people who strongly discount the future. People who do not assign moral weight to animals. People who view mental suffering as more important than physical suffering. People who embrace person-affecting views. Average utilitarians. Negative utilitarians. And so on. These different views lead to different conclusions about what maximizes "the good", meaning that different causes should be prioritized, even after accepting all of the fundamental goals of effective altruism.

There are even people who disagree with those fundamental claims and, for example, feel that they should give locally, not just because of the over-used and confused claims about local knowledge, but because they hold deontological or virtue-ethical positions. These are sometimes compatible with effective altruism, but often are not. So aside from my own questions about how to address moral uncertainty, I think that a fundamental part of benefiting others is respecting their current preferences, even when I disagree with them, and allowing them to benefit from thinking about effectiveness. Not everything needs to be EA. As long as people aren't doing harm, there seems to be little reason to discourage donations or activism that improve the world a little just because we disagree with their priorities, or think we know better. We probably want to encourage allies instead of criticizing them, and in many cases, point out that the disagreements are minor, or rest on straw men.

Big Tent EA

Some criticisms of EA have been that it's too demanding. This seems wrong. Not only do very few effective altruists embrace a maximalist and burdensome utilitarian view of ethical obligations, but we should be happy to encourage consequentialist altruistic giving across domains, even when it isn't optimal. While I would be appalled if GiveWell decided that education in the United States was a top global priority for charity, I'm perfectly happy for funds donated to schools to be donated more effectively. First-world poverty is much more expensive to address per person than helping subsistence farmers escape a poverty trap in the developing world, but I still hope people donating to that cause can be shown how to fund effective poverty-reduction strategies rather than, say, donations to the Salvation Army. These seem like useful expansions of Effective Altruist ideas and ideals, even if they don't optimize along every dimension of our priorities. Effective Altruists addressing those areas seems like a potentially large benefit, even ignoring the indirect effects of exposing people to questions of cause prioritization.

Finally, expanding the tent for Effective Altruism seems positive for the world, and as I'll argue below, it is unlikely to damage core EA priorities. I would hope to have effective altruism as a movement encourage broad adoption of the most basic version of what we promote: applying consequentialist thinking to giving.

3. People

I'd estimate that Effective Altruism is strongly embraced by, at most, 10,000 people. That is small: if we take 80,000 Hours' logic to its conclusion, it implies less than a billion hours of committed career time from effective altruists, in a world of 8 billion people. If we're trying to continue to improve the world, we need to scale further.
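To make the back-of-envelope arithmetic explicit (a rough sketch, treating a full career as the roughly 80,000 hours the organization's name refers to):

$$10{,}000 \text{ people} \times 80{,}000 \text{ hours each} \approx 8 \times 10^{8} \text{ hours} < 10^{9} \text{ hours}$$

And measured against roughly 8 billion people worldwide, 10,000 people is on the order of one committed effective altruist per million people.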

That means Effective Altruism absolutely cannot be an "elitist" group. And to be clear, we aren't interested in a big tent movement because it's strategically valuable; we are interested because the moral claim that people can and should try to improve the world with their resources applies universally. So we welcome people who "just" want to do good better with their charitable giving. Several EA orgs, such as Giving What We Can, do a good job making that clear, but I think many parts of EA have clearly missed the message.

A bigger EA means not everyone works directly

As mentioned above, not everyone will work in EA orgs. Yes, everyone I know working in Effective Altruism is critically limited by the difficulty of finding people to work on key areas, people with idiosyncratic skills or narrow expertise outside of our focus areas, and scaling would definitely make that easier. But the need for people is not limited to direct work. We want community members who embrace earning to give; again, not in the sense of maximizing their incomes in order to give, but simply working at generally beneficial and/or benign jobs and giving money effectively. We want to make the world better, safer, and happier, and that means bringing the world along, not deciding for it.

To put it in slightly different terms, you don't get to make, much less optimize, other people's decisions. Organizations like 80,000 Hours offer advice to more dedicated EAs about using their careers for good, but they aren't game guides for life. And we need to be clear that not everyone in the world should work directly on the highest-value interventions, especially given that talents and capabilities differ. Some people, the vast majority, in fact, should have normal jobs. If scaling EA means that everyone needs to work directly on EA causes, we're sharply limited in how much we can scale.

Objections

There are a variety of objections to my claims, which I will sort into two general buckets: normative and positive. The first sort of disagreement says the claims here are wrong about what we should do: that we shouldn't embrace people with different values, or that we should encourage people to maximize impact even when they are doing other things. I'm not going to debate those here. The second sort are predictive disagreements: that if we embrace the strategies I'm suggesting, Effective Altruism as a movement will be less effective. I think these are both more important and possible to discuss more productively.

Specifically, as mentioned above, I claim that big-tent effective altruism is unlikely to damage core EA priorities.

First, I think it's very plausible that the multiplier effect of changing charitable giving in developed countries could have very large impacts. The half trillion dollars a year given charitably in the United States doesn't need to become much more effective to have a huge impact. Moving a couple billion dollars from something completely ineffective to solving "first-world problems" isn't going to save as many lives as donations to AMF, but it will still have a larger impact than what many or even most people in EA work on.
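To put rough numbers on that multiplier effect (a purely illustrative calculation; the fraction $f$ is a made-up assumption, not an estimate): if even a fraction $f = 1\%$ of the roughly $G = \$500$ billion in annual US giving moved from essentially zero impact to effective uses, that would be

$$f \cdot G = 0.01 \times \$5 \times 10^{11} = \$5 \text{ billion per year}$$

of newly effective giving, twenty-five times the $200m GiveWell shortfall mentioned earlier.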

Second, I think that with even minimal care, people are unlikely to be misled into thinking that analyses comparing charities in terms of impact per dollar imply that lower-impact charities are nearly as effective as top GiveWell charities. Relatedly and finally, there could be a concern that we would push people who would otherwise be more impactful to focus on near-term issues, without realizing they are not being as impactful as they could be. This similarly seems unlikely, given the degree to which EA tends to be (brutally) honest about impact. Though as I'll mention below, slightly less abusive and better-informed criticism of other people's views and choices is probably needed.

I would be interested in hearing if and why others disagree.

Concrete next steps

I don’t think that this vision directly impacts most people’s priorities within EA. People who want to work on global development, AI safety, biorisk, or animal welfare should continue to do so.

But I do hope that it impacts their vision for EA, and the way they interact with others both inside and outside of EA. Yes, perceptions matter. If we don't have room for people who are interested and thinking about what they should do, even if they decide to choose different careers, to prioritize differently than we do, or to donate "only" 5% or 1% of their income to effective causes, then in my view we're making it very unlikely that the movement will be as successful as it could be. Worse, potentially, we won't be able to find out when we're making mistakes, because anyone who disagrees will have been told they aren't EA enough.

So as I've said before, I never want to hear anyone told "that's not EA," or see people give unsolicited criticism of someone else's choice of causes. A movement that grows and remains viable needs to be one where we can be honest without insulting others' choices and values. But unfortunately, I see this happen. I hear from people who started getting involved in local groups, got turned off, and almost left; I can reasonably assume they aren't the only ones who had that experience. If EA doesn't allow people to be partly on board, or says that certain things aren't good enough, we're cutting off diversity, alienating allies, and making our outreach and publicizing of the ideas less impactful. So if I'm right, the vision many people, especially newcomers and the younger and more enthusiastic EA devotees, seem to have is definitely not an effective way to scale a movement. And as I argued, we want to scale.

  1. ^

    For EA to be embraced politically, in a democratic society, it also needs to be embraced by at least a large part of the population—i.e. it requires scaling.

  2. ^

    As an aside, the changing focus of the initiators doesn’t mean that each problem has been solved, or made less valuable—just that there are even more neglected or higher leverage opportunities. We still need funding, and people, to finish solving these high-leverage and important problems.

  3. ^

    Effective Altruism has done some work on this front, but far more is needed.