The Alienation Objection to Consequentialism

This post condenses part of a book chapter, “The Alienation Objection to Consequentialism” (forthcoming in the Oxford Handbook of Consequentialism), which I coauthored with lead author Barry Maguire. A preprint of the penultimate version of the chapter can be found here. All mistakes in this post are entirely my own.

I. Introduction

Most EAs are either consequentialists or lean consequentialist [1]. However, (my impression is that) many EAs also practice Bayesian epistemology and/or strive to take seriously various forms of uncertainty, including moral uncertainty and epistemic uncertainty arising from peer disagreement. In this post, I discuss an objection to consequentialism—the alienation objection—that has not, to my knowledge, received as much airtime within the EA community as have other objections, such as the over-demandingness objection and the cluelessness objection. I do not think the alienation objection is a knock-down objection to consequentialism. But I do think that, all-things-considered, it should lower our credence in consequentialism.

The post is structured as follows: I begin section II with an example to intuitively motivate the alienation concern (II.A). I then give a rough conceptual sketch of alienation (II.B). Section II ends by stating the alienation objection (II.C). Section III considers the prospects for avoiding alienation of two popular forms of consequentialism—Global Consequentialism (III.A) and Leveled Consequentialism (III.B)—that go beyond simple Act Consequentialism. Section IV concludes with some takeaways.

II. What is Alienation?

A. An Example

Say you accept Act Consequentialism, according to which (very roughly) you should do whatever has the best consequences, i.e., whatever produces the most value in the world. Say further that, one Sunday afternoon, the action with the best consequences is to go support a friend who’s having a tough mental health day. Since you accept Act Consequentialism, you go support your friend. And in taking this action, you’re motivated by the fact that supporting your friend has the best consequences relative to everything else you could have done.

One thought to have about this case is that you have the wrong motivation in visiting your friend. Plausibly, your motive should be something like ‘my friend is suffering; I want to help them feel better!’ and not ‘helping my friend has better consequences than anything else I could have done.’ Imagine what it would be like to frankly admit to your friend, “I’m only here because being here had the best consequences. If painting a landscape would have led to better consequences, I would have stayed home and painted instead.” Your friend would probably experience this remark as cold, or at least overly abstract and aloof. They might have hoped that you’d visit them because you care about them and about your relationship with them, not because their plight happened to offer you your best opportunity for doing good that afternoon. To put things another way, we might say that your motivation alienates you from your friend.

B. The Concept of Alienation

The example of visiting your friend (hopefully) gives some vague intuitive sense of alienation. But what, in general, is alienation? As I’ll understand it in this post, alienation arises when we are inhibited from participating in an important type of normative ideal. Rather than trying to say exactly what a normative ideal is, I’ll give some examples. On the interpersonal level, there are ideals of being a good friend, a loving spouse, and a nurturing parent. On the intrapersonal level, there are certain psychological ideals, such as having coherence among our commitments, beliefs, and motives (more on this below). And on a more macroscopic level, there are ideals of being a virtuous citizen and being appropriately connected to the natural world.

As these examples may suggest, the type of ideal I have in mind involves some sort of harmony, closeness, or connection between things when it is realised. Such harmony might exist, for example, between two people in the case of friendship; or among different elements in a person’s psychology; or between someone and the natural world; or perhaps even among different social classes in a just society (insofar as the just society features class distinctions).

Another aspect that emerges when we reflect on these ideals is that many of them call for multifocal fittingness: for us to be appropriately oriented towards whatever it is (e.g. a person, a group, or a project) in our actions, motives, affects, ways of thinking, and level of commitment, and to be normatively integrated in this way both at discrete points in time and across time. To give a concrete example, friendship involves a holistic and distinctive way of engaging with our friends that plays out in our actions, motives, emotions, and commitments. We have fun hanging out with our friends and support them when they’re down; we’re motivated to spend time with our friends and to help them simply out of friendship; we feel joyful when something good happens to our friends; we don’t abandon our friends unless serious countervailing moral reasons to do so arise; etc. Alienation is just the state of being inhibited from participating in such a normative ideal. To take one last example, a parent may be alienated from their child if they adopt an extremely formal and disciplinary style of parenting that inhibits them from being affectionate towards their child.

C. Consequentialism and Alienation: The Basic Argument

In brief, the alienation objection is that accepting consequentialism inhibits us from participating in key normative ideals.

Of course, this is only a problem for consequentialism if these ideals are actually ethically authoritative (i.e., if they provide some fundamental normative input into our answer to the question, ‘how should I live my life, all-things-considered?’). I won’t provide a positive argument for the conclusion that there is, in fact, at least one ethically authoritative normative ideal. And I freely admit that if there are no such ideals, the alienation objection to consequentialism gets nowhere.

Why should anyone care about the alienation objection, then? Here are two reasons.

First, it is intuitively plausible that some normative ideals are authoritative. For example, the relational state of being in a committed romantic relationship seems to generate internal norms of its own and thereby to furnish us with reasons to behave in certain holistic ways (again, the ideal involves our motives, thoughts, emotions, actions, etc. as they relate to our significant other). This plausibility warrants investigating whether consequentialism can accommodate such ideals: if you have even a non-zero credence that some normative ideals are authoritative, it’s relevant to know whether consequentialism is compatible with them.

Second, a methodological point: given the intuitive plausibility of their ethical authority, certain normative ideals such as deep friendship constitute inputs into the process of thinking about ethics. (This essentially means they are defeasible starting points.) Consequentialism does not have this epistemic status: it is not an input into ethical theory; rather it purports to be the final ethical theory. If accepting consequentialism is incompatible with participating in key normative ideals, then, barring a compelling argument that no normative ideals are authoritative, it is consequentialism, and not the ideals, that needs to be revised.

III. Two Consequentialist Strategies [2]

A. Global Consequentialism

Act Consequentialism says we should perform whichever action has the best consequences, i.e., whichever action does the most on-balance good. Said differently, Act Consequentialism applies a direct consequentialist assessment to actions. Very roughly, Global Consequentialism applies a direct consequentialist assessment to everything we can evaluate, rather than just to actions. So, according to Global Consequentialism, in addition to performing the actions with the best consequences, we should also have whichever motives have the best consequences, whichever character traits have the best consequences, whichever roof colour has the best consequences, etc. Here are three problems for Global Consequentialism:

Problem #1: Global Consequentialism conflicts with partiality, but key normative ideals require partiality.

Both interpersonal and intrapersonal non-alienation often require some degree of partiality. For instance, fully participating in the (interpersonal) ideal of friendship involves devoting some non-trivial amount of resources (e.g., time, caring attention, and money) to our friendships. Similarly, fully participating in the (intrapersonal) ideal of pursuing a ground project—i.e., a project that gives shape and meaning to our life, such as developing a lifelong passion for amateur photography (as did Derek Parfit)—involves non-trivial resource expenditures. Presumably, however, these resources can often do more good if we allocate them impartially, e.g. if we donate our money to effective charities rather than spending it hanging out with friends or pursuing ground projects. Since we could do more good by allocating our resources impartially, Global Consequentialism requires us to do so (to the neglect of our friendships and ground projects).

Problem #2: Global Consequentialism runs into a dilemma regarding our moral beliefs and motives.

In this subsection we’ll explore a dilemma for Global Consequentialism. I’ll state the dilemma briefly and then flesh it out: Global Consequentialism either (a) allows us to have a “friendly,” non-alienated motive but, in doing so, sacrifices an intuitive fit between our moral beliefs and motives, or (b) retains the intuitive fit between moral beliefs and motives but, in doing so, requires us to have an alienating motive.

We can begin to make sense of this dilemma by recalling a thought we had earlier: having a consequentialist reason as our motive can be alienating. In the case of the Act Consequentialist supporting their friend, we saw that there was something wrong with the explicitly consequentialist motive, ‘supporting my friend produces the most value out of anything I could be doing right now.’

Global Consequentialism can avoid this problem. Recall, Global Consequentialism says you should have whichever motive produces the most value. Plausibly, having a “friendly” motive—‘my friend is suffering; I want to help them feel better!’—produces more value than having the explicitly consequentialist motive. So, according to Global Consequentialism, you should have the friendly motive when you go to support your friend. And if you have the friendly motive, you are not alienated from your friend. So far so good.

If the Global Consequentialist adopts this strategy, though, they run into a new problem: they will not be motivated by their own moral beliefs. This is a problem because, plausibly, it is fitting to be motivated by our moral beliefs. To provide some intuitive support for this claim, say you have the moral belief that we should keep our promises (unless keeping them would cause someone serious undue harm). Say further that you keep a promise to help a new acquaintance move a couch into their apartment. In this scenario, it is fitting for part of your motivation to be something like, ‘well, I promised I’d help them move the couch today, so I’d better head on over and help them out.’ It would be strange if the fact that you promised to help move the couch did not show up in your motivation to help move the couch, even though you believe you should keep your promises.

Say we accept that it’s fitting to be motivated by our moral beliefs. The problem for the Global Consequentialist is that if they have a friendly motive, they are not being motivated by their moral beliefs. To see this, return to the case of supporting your friend who’s having a tough mental health day. In this case, the Global Consequentialist’s moral belief is that they should support their friend because doing so has the best consequences. If the Global Consequentialist were motivated by this moral belief, their motivation would be, ‘helping my friend has better consequences than anything else I could have done.’ As we said before, this motive is alienating.

However, I suggested above that the Global Consequentialist should, by their own lights, have the friendly motive in supporting their friend (because having the friendly motive produces more value than having the consequentialist motive). But the friendly motive is simply, ‘my friend is suffering; I want to help them feel better!’. There is nothing about producing the most value or having the best consequences in this motive. So the Global Consequentialist is not motivated by their own moral belief, which is that they should visit their friend in order to produce the most value. Rather, the Global Consequentialist is directly motivated out of concern for their friend. Being motivated in this way avoids alienation from their friend but sacrifices the intuitive fit between moral beliefs and motives.

Problem #3: Global Consequentialism has trouble making sense of another intuitive fit, this one between our motives and commitments.

Problem #2 implicitly endorsed a psychological ideal that involves harmony between our moral beliefs and motives. Global Consequentialism seems unable to accommodate this ideal. Let’s now consider the relationship between our motives and commitments. I suggest that having a bona fide commitment to something—whether that is to another person, or to a skill like playing the violin, or to a ground project of figuring out how to minimise the suffering of nonhuman animals on factory farms—makes it fitting to have certain motives. In particular, if you’re committed to something, it’s fitting to be motivated to spend time and effort positively engaging with it, whether that’s by spending quality time with the other person, or by practicing the violin, or by doing research on effective factory farming interventions.

According to Global Consequentialism, however, the fact that we’re committed to something has no direct relevance to whether we should have the motives that fit with our commitment. Rather, on Global Consequentialism, we should simply have whichever commitments maximise value and whichever motives maximise value. If it happens to work out that having motives which fit with our commitments maximises value, great, but if not, tough luck.

Here are two problems with this Global Consequentialist response. First, even when Global Consequentialism does tell us to have the motives that fit with our commitments, it does so for the wrong reason. Say you love your mom and are committed to maintaining a deep familial relationship with her and caring for her as she grows older. And say one day you are motivated by your love and care to spend an afternoon with your mom. In this case, you need and should have no further reason or explanation for this motivation other than something like, ‘I love and care about my mom, so I want to spend time with her.’ Put another way, your motive flows directly from your commitment. In contrast, Global Consequentialism says that the ultimate reason/justification for your caring motive is that having it maximises value. (An important comment about ethical theory: it’s not enough for a theory to give us the right answers. It needs to give us the right answers for the right reasons.)

The second problem is that accepting Global Consequentialism seems to undermine our even having the commitment in the first place. If we accept Global Consequentialism, we must accept that our commitments have no direct motivational significance. (Again, whether or not you should have a motive is, according to Global Consequentialism, solely a function of whether having it would produce the most value.) This is in tension with the plausible thought that being directly motivated in certain ways by what we’re committed to is just part of what it is to be committed to something.

B. Leveled Consequentialism

Leveled Consequentialism is centrally concerned with higher-level psychological states like commitments, values, identities/roles, internalised decision procedures, character traits, and dispositions. These states are “higher-level” in the sense that they are relatively robust and persistent—we can’t just choose to have a different character trait in the way that we can choose to take, or not to take, an action. They are also higher-level because they generate and coordinate lower-level things like thoughts, emotions, motives, and actions. To return to an example from earlier, if you’re genuinely committed to something, you’ll tend to think about that thing in certain ways, feel joy/excitement when good things happen with/to that thing, be motivated to constructively engage with it, etc.

Leveled Consequentialism says we should have whichever higher-level psychological states it would be best to have. I’m going to focus on indirect versions of Leveled Consequentialism, according to which we should have a higher-level state if and only if, and because, it’s value-maximising to have it, but it’s permissible to have non-value-maximising thoughts, emotions, and motives and to perform non-value-maximising actions if and because they “come from” a value-maximising higher-level state. For example: if it’s a core part of your identity to be a loyal friend, and this facet of your identity is approved on consequentialist grounds, then you are permitted to buy a plane ticket to go to your friend’s wedding, even though you could have done more good by declining the wedding invitation, donating the money you would have spent on the ticket, and avoiding the flight’s contribution to fossil fuel emissions.

One of the main motivations for this indirect approach is the recognition that realising some of the most important goods in life, like steadfast friendship, romantic love, spontaneity, and highly skilled flow states, requires (among other things) an integrated motivational, cognitive, and affective engagement with various things just for their own sakes, free from consequentialist calculation. (In our jargon above, this type of holistic “for-its-own-sake” engagement is (partially) constitutive of one important type of normative ideal.) The hope of indirect Leveled Consequentialism is to (i) orient our lives around promoting the good by selecting and cultivating our higher-level states in accordance with consequentialist reasoning while (ii) retaining the ability to realise key life goods such as friendship, love, spontaneity, and flow by “screening off” (a good deal of) our everyday thoughts, emotions, motives, and actions from consequentialist calculation.

The Leveled Consequentialist approach sounds promising. One worry we might have about it, however, is that the contingency of our higher-level states on their being value-maximising is itself alienating (at least in some cases). Imagine being friends with someone just because being their friend is value-maximising. Something is intuitively wrong with this. Specifically, even if the relegation of consequentialism to a higher level in your psychology successfully “screens off” most of your first-order thoughts, emotions, motives, and actions regarding this friend, such that you usually interact with them just like a regular friend would, you remain prepared to end the friendship the moment you judge that doing so would maximise value. Or, to flip the situation around, imagine how you might feel if one of your friends, or, worse, your significant other admitted to you, “I’m only your friend/SO because it maximises impartial value.” Upon hearing this, you might rightly begin to question your relationship with this person.

Of course, it is open to the Leveled Consequentialist to simply add another level in cases like this. They might reasonably say, “no, you have it wrong; it’s not individual relationships that must stand the test of consequentialist assessment, but the higher-level structures that coordinate and support individual relationships (e.g., a generally affable disposition to form close interpersonal connections, or an internalised policy to maintain a robust friendship circle for its own sake). If it’s value-maximising to have such a disposition or policy, you’re good to go; you never need to submit your individual relationships to consequentialist assessment!”

This proposal seems like it might indeed avoid alienation. Note, though, that it avoids alienation only by relegating consequentialism to quite a lofty and hands-off level. On this proposal, a vast swath of our cognitive, affective, intentional, motivational, and practical life will be governed by our commitments (e.g. to being the world’s best dad), identities or roles (e.g. a role as a teacher and the core part this role plays in our identity), internalised decision procedures (e.g. ‘be honest unless doing so would cause someone serious undue harm’), character traits (e.g. being a warm, loyal friend), etc. And remember that, according to indirect Leveled Consequentialism, thought patterns, motives, intentions, affects, and actions that “come from” our higher-level states are justified by their relationship to these higher-level states. They do not need to meet the test of consequentialist assessment. The result is that a great deal of our ethical life—our way of being in the world, of relating to ourselves and others—comes to be governed and justified in a non-consequentialist manner.

One can insist that the ethical theory we’ve arrived at is nonetheless consequentialist because our higher-level (or highest-level) states are cultivated and justified on consequentialist grounds. But whether we want to call the resulting theory consequentialist, non-consequentialist, half-consequentialist, or something else strikes me as a merely linguistic dispute rather than a substantive philosophical disagreement. The substantive conclusion we’ve reached is that to avoid alienation and secure a range of life’s most important goods, consequentialism must ascend to a higher, more abstract, and less hands-on level of governance in our ethical lives, ceding much practical authority to a variety of non-consequentialist principles and modes of engagement with the world, other beings, and ourselves.

This completes the paper summary. What follows are my own thoughts.

Conclusion

What follows if some normative ideals are ethically authoritative and/or if some of life’s highest goods require non-consequentialist modes of engagement? Before offering some incomplete thoughts on this question, I want to be clear that I fully support the things we in the EA community canonically advocate, such as going reducetarian/vegetarian/vegan, donating more and to the most effective organisations, choosing careers based on careful impact assessment, and promoting cause neutrality. Although I find the considerations around alienation and normative ideals compelling, I also think we have strong ethical reasons to be altruistic—indeed, to be more altruistic than commonsense morality suggests—and that when we set out to be altruistic, we should do so as effectively as possible.

With this caveat in place, I’ll turn to some closing thoughts on the upshots of the alienation objection. First, at the theoretical level: I think the objection should lower our credence in consequentialism. Your posterior will, of course, be a function of (at least) your prior in consequentialism, how compelling you find the objection, and how much epistemic weight you give to interpersonal disagreement. But since one’s credences in different ethical theories constitute an important input into the process of handling moral uncertainty—and hence, I think, into leading an ethical life, which must account for moral uncertainty—the upshot that alienation concerns should lower our credence in consequentialism is an important one.
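For readers who find it helpful to see the shape of such an update explicitly, here is a toy Bayesian sketch in Python. Everything in it is my own illustrative framing rather than anything from the chapter: I treat the apparent authority of certain normative ideals as evidence E, assume (with the alienation objection) that E is less expected if consequentialism C is true than if it is false, and use made-up numbers simply to show the direction of the update.

```python
# A toy Bayesian update on one's credence in consequentialism.
# All probabilities below are hypothetical placeholders, chosen only
# to illustrate the direction of the update, not to estimate anything.
#
# C = "consequentialism is the correct ethical theory"
# E = "key normative ideals (deep friendship, love, ...) strike us as
#      ethically authoritative"
# If the alienation objection is right, E is harder to accommodate given C,
# so P(E | C) < P(E | not C), and conditioning on E lowers P(C).

def update_credence(prior_c: float, p_e_given_c: float, p_e_given_not_c: float) -> float:
    """Return P(C | E) via Bayes' rule."""
    numerator = p_e_given_c * prior_c
    denominator = numerator + p_e_given_not_c * (1.0 - prior_c)
    return numerator / denominator

prior = 0.6  # hypothetical prior credence in consequentialism
posterior = update_credence(prior, p_e_given_c=0.4, p_e_given_not_c=0.8)
print(round(posterior, 3))  # 0.429 -- credence falls, but not to zero
```

The numbers carry no weight on their own; the qualitative point is just that the more strongly you think P(E | not C) exceeds P(E | C), the larger the downward update, while a non-trivial posterior credence in consequentialism can remain.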

Second, at the practical level: throughout this post I have stressed that key normative ideals, like that of deep friendship, are constituted by a holistically integrated, multifocal type of extended engagement with sources of value. Deep friendship is constituted by a complex of thought patterns, emotions, intentions, commitments, activities, and internal norms that cohere in the right way and arise or are done simply out of friendship: for nothing other than the sake of the other person and your relationship with them. (Reflecting on goods like friendship and love highlights the fact that ethics touches our entire lives, not just our actions.) And yet, as I hope to have shown, internalising a consequentialist ethic can inhibit our full participation in these key normative ideals and thereby vitiate some of life’s highest goods. And even if we are able to screen off our first-order responses by pushing consequentialism into the background or into a higher-level, organising role in our lives, consequentialism often gives us the wrong reasons for having certain motives, taking certain actions, etc., and thereby fails as an ethical theory just as surely as it would if it gave the wrong prescriptions.

It’s ok to hang out with your friends, kiss your significant other, and pursue a hobby that brings you joy; to be motivated to do these things, intend to do them, and feel good about doing them; and for none of these things to be dependent on, motivated by, or justified by the fact that doing or having them maximises value, or the fact that they arise from a higher-level psychological state that maximises value, or any other consequentialist reason. We can hold—as I wholeheartedly do—that we should devote a substantive portion of our resources and life’s work to altruism, and that we should put exquisite care into doing so effectively, without subjecting the totality of our existence to consequentialist assessment or accepting that consequentialism, by itself, can account for the totality of our ethical lives.

Notes

[1] In the 2019 EA Survey, 80.7% of EAs identified with consequentialism.

[2] In the full paper we discuss two other classes of consequentialist-inspired ethical theory—Hybrid Theories and Relative Value Theories—that I’ve omitted from this post for the sake of brevity.