Effective Altruism and Utilitarianism

I think it’d be interesting to discuss the relationship between effective altruism and utilitarianism (or consequentialism more broadly). These are my initial reflections on the topic, intended as a launching point for discussion rather than a comprehensive survey. (I spent five years studying philosophy before deciding that this didn’t actually help anyone! So this post focuses on practicalities rather than precise analysis, though I’d enjoy hearing the latter too.)

A classification of EA views

EAs often say that their views don’t presuppose consequentialism (in its classic form, the view that an act is right if and only if it results in at least as much expected good as any other available act). And this is true for a wide range of characteristically EA views, such as the view that giving large sums (say 10% of your income) to charities ranked as highly cost-effective is a good thing. However, it is not true for other views which some regard as part of EA, such as the view that small chances of astronomically large effects on which lives come into existence in the future can outweigh any effects your actions have on people who exist now. This may not logically presuppose consequentialism, but it is generally based on it. On many moral views, people who don’t exist, and especially people who wouldn’t exist but for your actions, don’t matter morally in the same way as people who do. So it is helpful to divide EA beliefs into three categories:

  1. Those that are probably true even if consequentialism is false

  2. Those that are probably false (sometimes even repugnant!) on non-consequentialist views

  3. Those that fall into neither of the above categories (whether these beliefs are true is roughly a toss-up if consequentialism is false).

Which category a belief falls into is important. One uncontroversial reason for this is that many people are not consequentialists and that we want to convince them. Beliefs in category 1 will be the easiest sell, followed by those in category 3; beliefs in category 2 will be a tough sell.

Another reason is that consequentialism may be false. The importance of this possibility depends upon the probability we assign to it, but it must carry some weight unless it can be rejected absolutely, which is only plausible on the most extreme forms of moral subjectivism. I do not find these views credible, but going into this would be a digression, so I’ll simply flag that moral subjectivists will have a different perspective on this. I’ve also found that some other anti-realists are extremely confident (though not certain) that consequentialism is true, though it’s an open question how often this is reasonable.

When we’re concerned with convincing non-consequentialists, we will focus on the particular non-consequentialist positions that they hold, which will generally be those that are most popular. When we’re concerned with the possibility that consequentialism is false, by contrast, we should really care about whether the EA views at issue are true or false on the non-consequentialist theories that we find most plausible, rather than on the theories that are most popular. For instance, if you think that you might owe special duties to family members, then this is relevant regardless of how popular that position is. (That said, its very popularity may make you find it more plausible, as you may wish to avoid overconfidently rejecting moral claims that many thoughtful people accept.)

Which categories do EA views fall into?

The answer to this question depends on what non-consequentialist positions we are talking about. I’ll consider a few in turn.

First, take the position that people who don’t exist have less moral weight. There are several versions of this position, each with different implications. On one version, only people who exist matter at all; this would make far-future-oriented charities less promising. On another, people who don’t yet exist matter less; the implications of this depend on how much less, but in some cases effects on non-existent people won’t alter which charities are most effective. On yet another, certain sorts of people matter less: for example, people who won’t exist because you acted in a certain way. This version would affect our evaluation of existential-risk charities.

Second, a wide variety of positions directly or indirectly reduce the relative moral weight of animals, or of people who don’t currently exist. Consequentialism (and in particular classical utilitarianism, or some aspects thereof) is plausibly the moral theory that is friendliest to them. For example, when it focuses on pleasure and pain, it puts concern for animals on the strongest possible ground, since it is in their capacity to feel pain that animals are closest to us. So giving some credence to these positions should lower the moral weight we assign to animals.
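To make that concrete with toy numbers of my own (and setting aside the contested question of how best to aggregate credence across moral theories): if you place 70% credence in classical utilitarianism, on which an animal’s suffering counts at full weight (1), and 30% credence in a view on which it counts for only a tenth as much (0.1), a simple credence-weighted average gives 0.7 × 1 + 0.3 × 0.1 = 0.73 — lower than the weight you would assign under utilitarianism alone.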

A third sort of non-consequentialist position is that we should not act wrongly in certain ways even if the results of doing so appear positive in a purely consequentialist calculus. On this position, we should not treat our ends as justifying absolutely any means. Examples of prohibited means include the familiar forms of wrongdoing: dishonesty, unfairness, cruelty, theft, et cetera. This view has strong intuitive force. And even if we don’t straightforwardly accept it, it’s hard not to think that a sensitivity to the badness of this sort of behaviour is a good thing, as is a rule of thumb prohibiting it (something that many consequentialists accept).

It would be naive to suppose that effective altruists are immune to acting in these wrong ways; after all, what motivates them is not always that they are unusually nice or moral people. Indeed, effective altruism makes some people more likely to act like this by providing ready-made rationalisations which treat them as working towards overwhelmingly important ends, or even as vastly important figures whose productivity must be bolstered at all costs. I’ve seen prominent EAs use these justifications for actions that would shock people in more normal circles. I shouldn’t give specific examples that are not already in the public domain. But some of you will remember a Facebook controversy about something (allegedly and contestedly) said at the 2013 EA Summit, though I think it’d be fairest not to describe it in the comments. And there are also attitudes that are sufficiently common not to be personally identifiable, such as that one’s life as an important EA is worth that of at least 20 “normal” people.

A fourth and final non-consequentialist position to consider is that you owe special duties to family members or others who are close to you, and perhaps also to those with whom you have looser relations, such as your fellow citizens. This may limit the resources that you should devote to effective altruism, depending on how these duties are to be weighed. It may give you permission to favour your near and dear. However, it seems implausible that it generally makes it wrong to donate 10% of your income, though a non-consequentialist friend did once argue this to me (saying that taking aunts out for fancy meals should take priority).

An important question about this position is how the special duties it refers to are to be weighed against EA actions. It may be that the case for these actions is so overwhelming, because of contingent facts about the way the world happens to be (with severe poverty that can be alleviated astonishingly cheaply), that it significantly reduces the call of these duties. The sheer scale of the good that we can do does seem to provide unusually strong reasons for action. However, to assume that this scale is decisive would be to fail to take seriously the possibility that consequentialism is false, because non-consequentialists are not concerned only with scale.

Taking a step back, it’s worth noting that in the above I’ve focused on the ways in which some effective altruists could go wrong if consequentialism is false. Even in that case, the outlook for effective altruism in general is still quite positive. On most non-consequentialist views, effective altruist actions generally range from being supererogatory (good, but beyond the call of duty) to being morally neutral. These views generally regard giving to charity as good, and would consider taking it to EA lengths at worst misplaced, not seriously morally wrong (unless you seriously neglect those for whom you are responsible). They would hardly consider concern for animals’ wellbeing morally wrong either, even if animals have a significantly lower moral status than humans.

Ironically, the worst these views would generally say about effective altruism is that it suffers from high opportunity costs: having mistaken what matters, an effective altruist would not pursue what actually matters effectively. But since these views generally treat such pursuit as supererogatory anyway, the picture is again not so bad. (I owe these points to Gregory Lewis.)

What’s your take?

I’d love to hear people’s take on these issues, and on the relationship between effective altruism and consequentialism more broadly, which I certainly haven’t covered comprehensively. Which of the non-consequentialist positions above do you find most plausible, and what are the implications of this? And are there other non-consequentialist positions that would have implications for effective altruists?

(I would like to thank Theron Pummer, Gregory Lewis and Jonas Müller for helpful comments.)