The Ultimate Argument Against Deontology And For Utilitarianism

Crossposted to my blog. The formatting is a bit better there.

Very often, our intuitions about principles will conflict with our intuitions about cases. Most people, for example, have the following three intuitions:

The world would be improved if one person died and five others were saved.

If some action makes the world better, it isn’t wrong.

You shouldn’t kill one person to save five.

These three claims are, however, jointly inconsistent: the first two entail that killing one person to save five isn't wrong, which contradicts the third. In addition, as I've documented extensively, deontology (and, more broadly, any theory that believes in rights) requires giving up belief in dozens of plausible principles. Let me just list several:

  1. Perfectly moral beings shouldn’t hope you do the wrong thing.

  2. The fact that some act would give perfectly moral beings extra options doesn’t make it worse, all else equal.

  3. Doing the right thing won’t make things worse.

  4. If doing X is wrong, and doing Y is wrong conditional on having done X, then doing both X and Y is wrong.

  5. If some action is what you would do if you experienced everything experienced by anyone, or what you would choose from behind the veil of ignorance, and it is also approved of by the golden rule, then that action is right.

  6. If you do the wrong thing and you can undo it before it’s affected anyone, you should.

  7. If some action is wrong, and you can prevent it from happening at no cost, you should do so.

  8. It’s bad when more people are endangered by trolleys.

  9. Our reasons to take action are given by the things that really matter.

  10. If you should do A instead of C, then adding some third option will never make it the case that you should do C instead. In other words, options that you don't take don't affect the desirability of the ones you do take.

  11. If some action makes everyone much better off in expectation, you should take it.

  12. It’s wrong to perform lengthy sequences of immoral acts.

  13. If some action causes others to do the wrong things, that makes it worse as opposed to better.

  14. If some action is wrong, then if a person did it while sleepwalking, that would be bad.

  15. Perfectly moral beings being put in charge of more things isn’t bad.

There are, of course, various other principles that deontologists must reject as well, but the ones I’ve listed are the most plausible. And this is only the tip of the iceberg when it comes to arguments against deontology; there are many more. In addition, the same types of general arguments are available against other things that critics of utilitarianism believe in. For example, those who believe in desert have to believe that what you deserve depends on how lucky you are, and that it’s sometimes better for a person to be worse off. But let’s use rights as a nice test case to see just how overwhelming the case for utilitarianism is.

The argument above is the cumulative case against deontic constraints. But let’s compare it to the cumulative case for deontic constraints, and then see which has more force. The argument for deontic constraints is that they’re the only way to make sense of the following intuitions:

  1. It’s wrong to kill a person and harvest her organs to save five people.

  2. You shouldn’t push people off bridges to stop trains from running over five people.

  3. You shouldn’t frame an innocent person to stop a mob from killing dozens of people.

  4. It would be very wrong to kill someone to get money, even if that money could then be used to save lots of lives.

  5. It’s wrong to steal from people, even if you get more pleasure from the stolen goods than they lost.

  6. It’s wrong to torture people even if you get more pleasure from torturing them than they get pain from being tortured.

  7. It’s wrong to bring people into existence, give them a good life, and then kill them.

So the question is which type of intuition we should trust more—the intuitions about cases or the intuitions about principles. If we should trust the intuitions about cases, then deontology probably beats utilitarianism. In contrast, utilitarianism utterly dominates deontology when it comes to intuitions about principles. But it seems like we have every reason in the world to trust the intuitions about principles over the intuitions about cases (for more on these points, see Huemer’s great article and the associated paper, Revisionary Intuitionism).

  1. Suppose that some intuition about principles were true. We’d still expect the principle to be counterintuitive sometimes, because principles apply to lots of cases. If our moral intuitions are right 90% of the time, then if a principle applies to ten cases, we’d expect it to be counterintuitive once (I make this arithmetic explicit in a short calculation after this list). Given that most of these principles apply to infinitely many cases, it’s utterly unsurprising that they’ll occasionally produce counterintuitive results. In contrast, if some case-specific judgment were right, it would be a bizarre, vast coincidence if it conflicted with a plausible principle. So we expect true principles to conflict with case judgments, but we don’t expect true case judgments to conflict with principles. As a consequence, when cases conflict with principles, we should guess that the principle is true and the judgment about the case is false.

  2. We know that our intuitions constantly conflict. That’s why ethicists disagree and there are moral conflicts. In addition, lots of people historically have been very wrong about morality, judging, for example, that slavery is permissible. So we know that, at the very least, many of our judgments about cases must be wrong. In contrast, we don’t have the same evidence for persistent error about principles; I can’t think of a single intuition about broad moral principles that has been unambiguously overturned. So trusting the deontological intuitions over the utilitarian ones means trusting the kind of intuitions we know are often wrong over the kind we don’t. You might object by pointing out that utilitarians disagree with lots of broad principles, such as the principle that people have rights. But we don’t intuit that principle itself (and if we do, it’s clearly debunkable, for reasons I’ll explain in a bit); instead, we infer it from cases. This means the reason to believe in rights comes from intuitions about cases, which we know are often wrong.

  3. We know that people’s moral beliefs are hugely dependent on their culture. The moral intuitions of people in Saudi Arabia differ dramatically from the intuitions of people in China, which differ dramatically from the intuitions of people in the U.S. So we have good reason to expect that what we think are genuine moral intuitions are often just reflections of culturally transmitted values. But none of the principles I gave has any plausible cultural explanation; there is no government document that declares “the fact that some act would give perfectly moral beings extra options doesn’t make it worse, all else equal.” No one is taught that from a young age. In contrast, norms about rights are hammered into us all from a young age; the rules taught to us from the time we are literally babies are deontological. We are told by our teachers, parents, and government documents that people have rights, and doubting this is seen as a sign of corrupt character (I remember one debater who, attempting to paint me as a terrible person, declared that I “literally don’t think anyone has human rights”). We are told not to take the toys of others; we are not told to take their toys only when doing so is optimific. So it’s not hard to explain how we would come to have these intuitions even if they were bullshit. In contrast, there is no remotely plausible account of how we came to have the intuitions that, if accepted, lead us inescapably to utilitarianism.

  4. We know that our moral beliefs are often hugely influenced by emotions. Emotions can plausibly explain a lot of our non-utilitarian beliefs; contemplating genuine homicide brings out a lot of emotion. In contrast, the intuition that “if it’s wrong to do A and wrong to do B after doing A, it’s wrong to do A and B” is not at all emotional. It seems true, but no one is passionate about it. So very plausibly, unreliable emotional reactions can explain our non-utilitarian intuitions: our desert-based intuitions come from our anger towards people who do evil; our rights-based intuitions come from the horror of things like murder. We have lots of evidence for this. We know, for example, that when people’s brains are damaged in ways that make them less emotional, they become almost six times more likely to support pushing the fat man off the bridge to save five.

  5. We know that humans have a tendency to overgeneralize principles that are usually true. Huemer gives a nice example involving the counterfactual theory of causation: a lot of people intuitively accept a simple counterfactual model of causation, even though it has clear counterexamples (I provide the full quote in a footnote[1]). But this tendency can explain every single non-utilitarian intuition. It’s obviously almost always wrong to kill people, and so we infer the rule “it’s wrong to kill,” which we then apply even in weird, gerrymandered scenarios where it’s not wrong to kill. Every single counterexample to utilitarianism seems to involve a case in which an almost universally applicable heuristic doesn’t apply. But why would we trust our intuitions in cases like that? If we think about murder a million times and conclude it’s wrong all of those times, then we infer the rule “you shouldn’t murder,” even if there are weird scenarios where you should. That’s why, when you modify a lot of the deontological scenarios so that they are less like real-world cases, our intuitions about them go away. You might object that utilitarian principles can also be explained in this way. But the utilitarian principles aren’t just attempts to generalize our intuitions about cases; we have an independent intuition that they’re true, before considering any cases. The reason we think that perfectly moral beings shouldn’t hope you do the wrong thing is not that there are lots of cases where perfectly moral beings hope you do particular things and we also know that those things are right. Instead, it’s that we have an independent intuition that you should want people to do the right thing, which means we don’t acquire the intuition from overgeneralizing, or from generalizing at all. The intuitions supporting utilitarianism don’t rely on judgments about cases; they rely on the inherent plausibility of the principles themselves.

  6. We know that our linguistic intuitions affect our moral intuitions. We think things are wrong because they sound wrong. But the moral intuitions supporting utilitarianism sound much less convincing than the moral intuitions supporting deontology. No one recoils in horror at the suggestion that giving a perfectly moral person extra options makes an act worse. In contrast, people do recoil in horror at the idea that it’s okay to kill and eat people if you get enough pleasure from it. Anscombe famously declared of the person who accepts that you should frame an innocent person to prevent a mob from killing several people, “I do not want to argue with him; he shows a corrupt mind.” So it’s unsurprising that so many people are non-utilitarians: the non-utilitarian intuitions sound convincing, while the utilitarian intuitions sound sort of boring and bureaucratic. No one cares much about the principle that if it’s wrong to do A and then do B, it’s wrong to do A and B. And when people are given moral dilemmas in a second language, they’re more likely to give utilitarian answers, which is well explained by our non-utilitarian intuitions depending on non-truth-tracking linguistic intuitions.

  7. Some of our beliefs have evolutionary explanations. Caring more about those close to us, caring more about friends and family, and not wanting to do things that would make us directly morally responsible are all easily debunked evolutionarily. But those intuitions are core deontological intuitions.

  8. False principles tend to have lots of counterexamples, many of them very clear. For example, the principle that you shouldn’t do things that it would be bad for everyone to do implies that you shouldn’t be the first to cut a cake, that you shouldn’t move to a secluded forest, and that it’s wrong to kiss your spouse (it would be terrible if everyone kissed your spouse). Often, you can derive straightforward contradictions from such principles. So when there is a principle without a clear counterexample, as is true of the principles listed above, you should give it even more deference.

  9. Our moral beliefs are subject to various biases, and our deontological intuitions seem uniquely subject to debunking on this score. For example, humans are subject to status quo bias, an irrational tendency to prefer maintaining the status quo. But deontological norms instruct us to maintain the status quo even when diverging from it would be better. So we’d expect to be biased toward believing in deontic norms even if they weren’t true.

  10. Deontological norms look like the norms we’d expect people to adopt to rationalize minimizing their own blameworthiness. Our intuition that you shouldn’t kill one person to save five can be explained by the fact that you’re blameworthy if you kill but not if you merely fail to act (after all, everyone is constantly failing to act in countless ways). But if our moral beliefs stem from rationalizing courses of action for which no one could criticize us, then they can be debunked.
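
To make the arithmetic in point 1 explicit, here is a minimal sketch, on the simplifying assumptions (mine, not part of the original argument) that intuitions about distinct cases are independent and each correct with probability $p = 0.9$. For a principle that applies to $n = 10$ cases:

$$E[\text{counterintuitive verdicts}] = n(1 - p) = 10 \times 0.1 = 1$$

$$P(\text{at least one conflict}) = 1 - p^n = 1 - 0.9^{10} \approx 0.65$$

Since most of the principles above apply to unboundedly many cases, $1 - p^n \to 1$ as $n$ grows: a true principle colliding with some case intuition is all but guaranteed, which is exactly what point 1 predicts.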

So it seems we have overwhelming evidence against the trustworthiness of our deontological intuitions. It’s not hard for a utilitarian to explain why so many intuitions favor deontology on the hypothesis that utilitarianism is true. In contrast, there is no plausible explanation of why we would come to have so many intuitions that favor utilitarianism if it were false. The deontologist has to suppose that we make errors over and over again, with no explanation of why, and that the intuitions in error are precisely the most trustworthy ones. This is miraculously improbable, and gives us very good reason to give up deontology.

Edit: After I wrote this, I realized it greatly resembled Richard Y Chappell’s master argument. They’re similar, but I think my argument is more sweeping and describes how the entire debate between utilitarians and deontologists should be resolved. I also disagree with his master argument, but that’s a story for another day.

  1. ^

    “For example, the following generalization seems initially plausible:

    (C) For any events X and Y, if X was the cause of Y, then if X had not occurred, Y would not have occurred.

    But now consider the following case: The Preemption Case: Two mob assassins, Lefty and Righty, have been hired to assassinate FBI informant Stoolie. As it happens, both of them get Stoolie in their sights at about the same time, and both fire their rifles. Either shot would be sufficient to kill Stoolie. Lefty’s bullet, however, reaches Stoolie first; consequently, Lefty’s shot is the one that actually causes Stoolie’s death. However, if Lefty had not fired, Stoolie would still have died, because Righty’s bullet would have killed him.

    This shows that there can be a case in which X is the cause of Y, but if X had not occurred, Y would still have occurred.”