Clarifying the core of Effective Altruism

For some time now I’ve been rather hesitant about pitching Effective Altruism to people, because I wasn’t really sure how to summarise EA. It’s become a fairly diverse movement by now, in terms of causes and interventions that people prioritise. Is there a way to link all of these areas under one banner, with that banner still making substantive and novel claims? Here I’ve drawn on previous discussion by Will MacAskill and Ben Todd, attempted to improve the claims they make, and then discussed some extensions to their arguments which seem important.

Note: this post is in two halves. If you only read one half, read the second one; it’s more important, and more novel.

What is Effective Altruism?

Here’s Ben:

The claim: If you want to contribute to the common good, it’s a mistake not to pursue the project of effective altruism.

The project of effective altruism is defined as the search for the actions that do the most to contribute to the common good (relative to their cost). It can be broken into (i) an intellectual project – a research field aimed at identifying these actions – and (ii) a practical project to put these findings into practice and have an impact.

I define the ‘common good’ in the same way Will MacAskill defines the good in “The definition of effective altruism”, as what most increases welfare from an impartial perspective. This is only intended as a tentative and approximate definition, which might be revised.

I find it odd that this definition is non-normative—that is, it doesn’t say what people morally should do. In particular, it doesn’t actually defend being impartial or welfarist, or even moral at all. Yet in practice, a shared belief in the importance of morality is a defining characteristic of EA, and I’m not sure what we gain by excluding it from the definition. Movements can fail by being too demanding, but they can also fail by not being demanding enough to foster a strong sense of purpose—especially among people who, by default, aren’t very motivated by altruism. I suspect that the intuitions of movement leaders might not represent the latter group very well.

Will’s definition is also non-normative; in justifying this choice, Will says:

There are two ways in which the definition of effective altruism could have made normative claims. First, it could have made claims about how much one is required to sacrifice: for example, it could have stated that everyone is required to use as much of their resources as possible in whatever way will do the most good; or it could have stated some more limited obligation to sacrifice, such as that everyone is required to use at least 10% of their time or money in whatever way will do the most good.

But these two options seem very far from exhausting the space of possibilities. For one thing, normative claims don’t need to be as specific as the ones Will mentions. For another, they don’t need to be phrased in terms of moral obligations. So I’d propose to split Ben’s claim above into two:

  • If you could contribute much more to the common good without making major personal sacrifices, then it’s morally important to do so.

  • If you want to contribute much more to the common good, it’s a mistake not to pursue the project of effective altruism.

Here we’re not specifying whether morally important actions are obligatory or good but not required (in technical terms, supererogatory). I expect that it’ll be useful, when advocating for EA, to highlight that some people choose to interpret it either way. And similarly for what we mean by “contribute much more” and “major personal sacrifices”. This is a little watered-down, but I think that it’s good enough for almost all purposes—individuals are free to adopt strong definitions, but it’s not necessary for the movement as a whole to stand for any of them in particular.

One other feature of my definition is that the two claims I’ve made don’t contain any maximalist language. By contrast, Ben’s definition implies that not contributing to the common good as efficiently as possible is a mistake (as highlighted in the comments on his original post). And Will also talks about doing “as much good as possible” with given resources. But I’ve personally never found such phrases compelling, for a few reasons.

  1. I think that ethics is not well-defined enough for “the most good” to be a coherent concept (for roughly the same reasons that many other concepts tend to break down when we push them to extremes).

  2. In the face of radical uncertainty about the future, it seems hard to ever justifiably claim that one course of action is the “best thing to do”, rather than just a very good thing to do.

  3. Almost everyone chooses altruistic actions based partly on non-altruistic goals—for example, by factoring in their personal preferences about which charity to donate to. Yet if those choices are still largely driven by the aim of doing a lot of good, the fact that they aren’t technically maximising the good shouldn’t make much difference.

I think that emphasising the moral importance of doing a lot more good still captures the core idea here, without the additional commitments entailed by saying that people should be maximalist (for reasons I describe here). However, I’m still happy to talk about the project of effective altruism as the search for the actions that do the most to contribute to the common good (given limited resources) [0], since it’s such a convenient phrase—as long as we understand that it’s only an approximation.

What are the arguments for Effective Altruism?

Ben again:

The three main premises supporting the claim of EA are:

  • Spread: There are big differences in how much different actions (with similar costs) contribute to the common good.

  • Identifiability: We can find some of the high-impact actions with reasonable effort.

  • Novelty: The high-impact actions we can find are not the same as what people who want to contribute to the common good typically do.

Since I’ve added a moral claim to his original formulation, we presumably need moral premises to support it. Welfarism and impartiality seem like two natural candidates; I’d add a third premise about how individuals should relate to morality, in order to support the normative claim I made previously. However, I won’t dig into the details of these now; I’m more interested in discussing the three empirical premises.

I think these three premises do a good job of summarising the core argument for EA. However, they give a misleading impression of EA unless we acknowledge that different people interpret the scope of these claims very differently, and cite very different evidence in favour of them. For example, the implicit definition of “big differences” used when comparing donations to AMF versus the Make-A-Wish Foundation is very far from the one used when discussing astronomical waste. Attempting to convey the core ideas of EA without explicitly addressing this tension may create confusion. We might also unintentionally commit a motte-and-bailey fallacy by defending weaker versions of the arguments, and then acting on stronger ones. So below I identify three domains in which we can apply these arguments, roughly corresponding to different views on what epistemic standards EA should apply; different people can then explicitly distinguish which versions of the premises they’re defending. These domains overlap considerably, but I think a rough attempt to disambiguate them is better than none.

EA as social science

First is the domain of standard academic research in the social sciences: randomised controlled trials, statistical analysis of data, peer review, and so on. One interpretation of our premises is that using these types of analysis to evaluate interventions allows us to identify some which are several orders of magnitude more impactful than usual. Let’s call this “EA as social science”. Under this heading I’d also include importing ideas about charity evaluation from the business world—for example, not penalising charities for high overheads or staff costs.

EA as hits-based altruism

It turns out, however, that there are a lot of domains in which reaching a solid academic consensus is very hard, and yet the impacts of good work can be large—for example, political advocacy for morally important policies. What does EA add to existing thinking about these domains? I’d identify two core claims: that we can significantly increase our impact by

  • Using careful consequentialist reasoning which incorporates quantitative considerations (but isn’t necessarily as rigorous as academic research is meant to be); and by

  • Being less risk-averse, and generally thinking more like entrepreneurs and venture capitalists.

The arguments that entrepreneurs make about why they’ll succeed are a very long way from being academically rigorous; indeed, they’re often barely enough to convince venture capitalists who actively embrace crazy ideas. Nevertheless, those entrepreneurs succeed often enough in business to make it valuable to back them; we might hope that the same is true for altruists with similarly ambitious plans. (To illustrate with purely hypothetical numbers: if one in twenty ambitious projects succeeds, but each success does a hundred times as much good as a safe alternative, then backing the whole portfolio is still five times better in expectation.) I’ll call this domain “EA as hits-based altruism”.

To be clear, this perspective on EA isn’t just about starting new organisations, but more generally about finding powerful yet neglected levers for influencing the world. I consider Norman Borlaug launching the Green Revolution to be one of the best examples. I hope that clean meat will be a comparable success story in a few decades, and the same for projects to improve institutional decision-making. Another type of “hit” is the discovery of a new moral truth—for example, that wild animal suffering matters. Note that a great altruistic idea doesn’t need to be as counterintuitive as a great startup idea, because the altruism space is much less competitive. Most of OpenPhil’s grants in policy-oriented philanthropy and scientific research funding seem to fall within this domain.

EA as trajectory change

Thirdly, we can try to predict humanity’s overall trajectory over the timeframe of centuries or longer, and how to shift it (which I’ll call “EA as trajectory change”). Compared with hits-based altruism, this depends on much more speculative reasoning about much bigger-picture worldviews, and receives much less empirical feedback. Previous events which plausibly qualify as successful trajectory changes include abolitionism; feminism; the foundation of democracy in America; the Enlightenment; the Scientific Revolution; the Industrial Revolution; the Allied victory in World War 2; and the fight against global warming. These tended to involve influencing people’s moral values, or changing the way progress in general occurs; looking forward, they might also involve reducing existential risk. I think there’s a pretty clear case that such changes can do a huge amount of good; the more pressing question is whether we’re able to identify and influence them to a non-negligible extent.

For each of the domains I’ve just discussed, we can make the case for EA by arguing that the three original premises all apply to it. In doing so, we’ll need to make claims about the general properties of the types of interventions in each domain; I hope that the categories are natural enough to make this tenable, but I expect it to be a difficult endeavour nevertheless. Note that some interventions might be supported by different versions of the EA premises in different ways—for example, we might think that the goal of reducing existential risk is tractable because of arguments about EA as trajectory change, but then also endorse unusual ways of pursuing it because of arguments about EA as hits-based altruism. As another example, the cause area of improving institutional decision-making draws on a substantial body of academic research, making it an example of EA as social science. However, the justifications for why improving institutional decision-making will lead to large benefits tend to rely on arguments from one of the other domains.

I give some more specific thoughts on how defensible the EA premises are in each of these domains in a follow-up post, My Evaluations of Different Domains of Effective Altruism.

[0] Note that I prefer “given limited resources” over “relative to their cost”, for reasons described here.