Act utilitarianism: criterion of rightness vs. decision procedure

A useful distinction for people thinking about act consequentialism in general, and act utilitarianism in particular, is that between a criterion of rightness and a decision procedure (discussed in much more detail by Toby Ord). A criterion of rightness tells us what it takes for an action to be right (if it’s actions we’re evaluating). A decision procedure is something that we use when we’re thinking about what to do. As many utilitarians have pointed out, the act utilitarian claim that you should ‘act such that you maximize the aggregate wellbeing’ is best thought of as a criterion of rightness, not as a decision procedure. In fact, trying to use this criterion as a decision procedure will often fail to maximize the aggregate wellbeing. In such cases, utilitarianism will actually say that agents are forbidden to use the utilitarian criterion when they make decisions.

There’s nothing inconsistent about saying that your criterion of rightness comes apart from the decision procedure it recommends. We can imagine views where the two come apart even more strongly than they do under utilitarianism. Imagine a moral theory that had the following as a criterion of rightness:

No Thoughts Too Many (NTTM): An action can only be right if it is not the result of a deliberative process about what is right to do.

If we assume that trying to use NTTM as a decision procedure would itself constitute a deliberative process, then the NTTM criterion of rightness is inconsistent with using NTTM as a decision procedure.

We can think of more mundane examples of criteria of rightness that won’t always recommend themselves as decision procedures. Suppose that a criterion of rightness for meditating is to clear your mind of thoughts, but that people who try to clear their mind of thoughts are worse at doing so than people who use other techniques, such as focussing on their breath. ‘Clear your mind of thoughts’ might be a good criterion of rightness for meditation, but a bad decision procedure by the lights of that criterion. Or suppose that you are holding a gun with a misaligned sight. The criterion of rightness might be ‘hit the target’, while it’s better to use ‘aim 2ft to the right of the target’ as your decision procedure. (An interesting class of examples of this sort involves what Jon Elster calls ‘states that are essentially by-products’, such as being spontaneous. Thanks to Pablo Stafforini for pointing this out.)

So when is it good to accept or use ‘act such that you maximize the aggregate wellbeing’ as a decision procedure? The simple answer, if we assume act utilitarianism, is: whenever doing so will maximize the aggregate wellbeing! Of course, it can be hard for us to know when accepting or using the utilitarian criterion of rightness as a maxim would be the best thing for us to do. Perhaps most of us should never use it as a decision procedure.

What are the reasons for thinking that ‘act such that you maximize the aggregate wellbeing’ is a bad decision procedure? First, it is extremely cumbersome as a decision procedure: it is simply implausible that it would be best for us to stop before taking every step forward or buying every can of soup to consider how these things affect the aggregate wellbeing. For everyday tasks, it makes sense to have simpler maxims to hand. Second, it is easy to apply naively. When we try to think about all of the outcomes of our actions, we often fail to take into account those that aren’t obvious. An example of this fault can be seen in the classic naive utilitarian who promises to watch your purse and then, as soon as a needy stranger comes along, gives your purse to them. The immediate impact of their action might be to transfer money from you to the needy stranger, but it also undermines the trust that people can place in them in the future. Moreover, in order to do a lot of good in the world, people need to co-operate, and it’s virtually impossible for people to co-operate without norms of trust and honesty being in place. Third, explicitly using ‘act such that you maximize the aggregate wellbeing’ as a maxim can alienate us from those we care about. Very few people would be happy to think that, while talking to them, their friends are calculating how much moral good the conversation is generating.

There are definite costs to using the utilitarian criterion of rightness as a decision procedure. But sometimes these costs are diminished, and sometimes the benefits of using it in deliberation might outweigh them. I think this might be true when we’re thinking about ‘large-scale’ decisions: where to allocate large amounts of money (e.g. annual charity donations, or government funding), what policies governments or companies should adopt, what to do with our lives (e.g. what career, what research, or what lifestyle to pursue), or how we should advise others on these matters. These decisions share several features. First, using ‘act such that you maximize the aggregate wellbeing’ in these cases wouldn’t conflict with prosocial norms like ‘don’t lie’, and so it wouldn’t undermine trust or co-operation. Second, we can make each of these decisions slowly and with care, which is important given how hard the criterion is to apply well. Third, they are decisions with significant impact, which makes it more important to take into account how they affect the world: the cost of a more cumbersome decision procedure can be justified when the stakes are sufficiently high. Finally, using it in these impersonal circumstances is not likely to alienate people with whom we have close relationships. It is still questionable whether most of us should try to apply the criterion of rightness as a decision procedure even in these cases, but they strike me as candidates for decisions where it will sometimes be right to do so. The rest of the time, the criterion of rightness can sit as an evaluative backdrop to life: something one is aware of, but not too eager to call upon in deliberation.

It seems plausible to me that someone behaving well by the lights of the utilitarian criterion of rightness will not regularly employ it as a maxim (note that we don’t need to appeal to rule utilitarianism or moral uncertainty to make this point). And many of the best people, by the lights of the act utilitarian criterion of rightness, will never even have heard of utilitarianism. For the most part, I suspect that the ideal act utilitarian would be a good friend, would try to keep promises, would aim to be honest and kind, and, when it comes to major decisions, would stop to think about how those decisions will affect the wellbeing of everyone. This kind of person seems much better for the world than the naive act utilitarian that most people are understandably put off by.