Here's a script submission on the topic of Utilitarianism and Moral Opportunity. If an animation ends up being created based on this, I'd be keen to add the video to the front page of utilitarianism.net
***
Imagine that a killer asteroid is heading straight for Earth. With sufficient effort and ingenuity, humanity could work to deflect it. But no-one bothers. Everybody dies.
This is clearly not a great outcome, even if no-one has done anything morally wrong (since no-one has done anything at all).
This scenario poses a challenge to the adequacy of traditional morality, with its focus on moral prohibitions, or "thou shalt nots". While it's certainly important not to mistreat others, prohibitive morality – or what philosophers call deontology – isn't sufficient to address today's global challenges.
Prohibitions aren't enough. We need a positive moral vision that guides us towards securing a better future. Ideally, our actions should be guided by what's truly important.
[Utilitarianism]
This general idea, that we should be guided by considerations of overall value (or what's important), is called consequentialism. Different consequentialists may have different theories of value. One simple but appealing theory of value is welfarism, the view that what ultimately matters is the well-being of sentient beings like ourselves. When we combine consequentialism and welfarism, and count each individual's interests equally and without bias, the resulting moral theory is called utilitarianism.
Utilitarianism is a controversial, but commonly misunderstood, moral theory. In what follows, we'll set out the basic case for utilitarianism, and then address some common misconceptions.
Utilitarianism draws on three basic principles:
(1) Welfarism: what ultimately matters is the well-being of sentient beings
(2) Impartiality: everyone matters equally
and
(3) Consequentialism: it's better to do more good than less.
[Welfarism]
You probably agree that your own interests matter (as do the interests of those you love). But why? Is it because you aim at things? Well, so does a homing missile, but inanimate weapons surely lack intrinsic value. Is it because you're alive? That seems more plausible, until you realize that bacteria are also alive, but don't seem to matter morally.
The most plausible answer that philosophers have come up with is that what grounds our moral status is sentience: our ability to suffer or enjoy conscious experiences. It seems clear that sentient creatures matter in a way that viruses and inanimate rocks do not.
Might things beyond sentient creatures – such as flourishing natural ecosystems – also matter? Environmental preservation can of course have great instrumental value, to protect the well-being of existing and future sentient beings. So one can continue to support environmentalism whichever way one answers this theoretical question. But it's at least harder to see how an ecosystem by itself could have value, without anybody there to value it.
Finally, even if one thinks that some things do matter besides well-being, it's important not to be too extreme about it. A world where all the sentient beings were in constant agony would clearly be a terrible world, no matter what else it had going for it. So even if we allow some modest weight to other values, we should probably all agree that welfarism is at least approximately correct, in that promoting overall well-being is the most important thing that contributes to making the world better.
[Impartiality]
On to the second principle: Everyone matters equally. The greatest moral atrocities in history, from slavery to the Holocaust, stem from denying moral equality, and holding that certain groups of people don't matter and can rightly be oppressed, their interests and well-being disregarded by those with greater power.
Utilitarianism rejects this evil at its root. It opposes not just racism, sexism, and homophobia, but also nationalism, speciesism, presentism, and any other bias or "ism" that would lead us to disregard the suffering of any sentient being.
Utilitarians believe that if someone can suffer, then they matter morally, and we ought in principle to care as much about preventing their suffering (and promoting their well-being) as we would anyone else's. Just as we recognize that people in the past were wrong to disregard others' interests, so we should expect that disregarding others' interests could lead us into moral error today.
Today, many people systematically disregard the urgent needs of the global poor, of non-human animals, and of future generations. Utilitarians urge us to rectify this error, and do what we can to help all of those in need, so that others might get to lead the sorts of flourishing lives that we would wish for ourselves and our loved ones.
Some hold that strict impartiality is too extreme. Surely, you might think, it's justifiable to prioritize your friends and family over total strangers, at least to some extent? Maybe so. But even if we can give some extra weight to our nearest and dearest, we may still agree that utilitarianism is at least approximately correct, in that it's still important to give significant weight to the interests of others. It would be a serious moral error to disregard them completely, or to come close to doing so.
[Consequentialism]
Our final principle holds that it's better to do more good than less. This sounds obvious, but is often neglected. For example, when donating to charity, very few people put effort into finding the best cause possible. But some organizations can do hundreds or even thousands of times more good than others, so the choice of where to give can be even more important than how much you give. $100 to a highly effective charity will be much more worthwhile than even $100,000 to an ineffective (let alone counterproductive) charity. For this reason, utilitarianism encourages people to find and put into practice the very best ways of doing good.
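To make the arithmetic behind that comparison explicit, here is a minimal sketch, where k is a hypothetical effectiveness multiplier standing in for the "hundreds or even thousands of times" figure above (not a number taken from any study):

\[
\frac{\text{good done by } \$100 \text{ at the effective charity}}{\text{good done by } \$100{,}000 \text{ at the ineffective charity}}
= \frac{100 \times k}{100{,}000 \times 1}
= \frac{k}{1{,}000}.
\]

So whenever the multiplier k exceeds 1,000, the $100 donation does more good in absolute terms than the $100,000 one.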
Utilitarianism gets controversial when there are tradeoffs between different people's interests. It seems wrong to kill an innocent person as a means to saving several other lives. And in practice, utilitarians will agree: anyone who thinks that violating individual rights will lead to better results in the long run is almost certainly mistaken. High social trust is incredibly valuable, and real-world utilitarians know they can do better by behaving cooperatively, rather than villainously, in pursuit of the greater good.
Critics may insist that this isn't good enough: that utilitarians are here getting the right result for the wrong reasons, and that it would be wrong to kill one as a means even if it would truly do more good. But why? If we delve into deeper moral explanations, consequentialist reasons – that this would lead to a better world than any alternative action – seem hard to beat. Non-consequentialist prohibitions, by contrast, run into the paradox of deontology: it seems downright irrational to insist that killing is so bad that it ought not to be done, even to prevent more killings. If killing is so bad, shouldn't we wish to minimize its occurrence? Deontology looks like it cares more about clean hands than it does about people's lives, and it's hard to see how that could be an accurate view of what ultimately matters.
So it remains clear that you shouldn't go around killing people for the so-called "greater good", and utilitarians agree with this practical claim. Intuitively monstrous acts are likely to be horrendously counterproductive. Critics disagree that this is why those acts are wrong, but this makes little difference in practice. Even if you side with the critics on this explanatory question, you might still agree that utilitarianism is at least approximately correct, as it not only tells us to avoid monstrous acts, but additionally reminds us to pursue positively good ones.
[The Veil of Ignorance]
An important argument for utilitarianism invokes a thought experiment known as the veil of ignorance. The basic idea is that our judgments are often biased in our own favor. It's not a coincidence that white supremacists are overwhelmingly white themselves, for example. To avoid such biases, it's worth asking what it would be rational to want if you didn't know who in the world you were. Imagine looking down on the world from behind a "veil of ignorance": a God's-eye view of everything that occurs, but one that leaves you ignorant of which person down there is you. It would clearly be irrational to endorse white supremacy from behind a veil of ignorance, given the odds that you could end up suffering the consequences as a non-white person. This test provides a simple proof that white supremacy is morally unjustifiable, since even the white supremacist himself could no longer endorse it from the "neutral" position behind the veil.
But the veil of ignorance can be applied more broadly than this. If you assume that you're equally likely to end up as anyone, standard decision theory implies that the rational choice is whatever option maximizes well-being on average. This is worth bearing in mind when presented with a supposed counterexample to utilitarianism. Imagine it maximizes well-being to push someone in front of a trolley, activating the emergency brakes in time to save five others. Critics claim that pushing the one in front of the trolley is wrong. But note that this act is what all six people involved would agree to from behind the veil of ignorance! (After all, it gives each a 5/6 chance of survival, instead of just a 1/6 chance.) And how could it be wrong to do what everyone involved would have agreed to, if only they'd been freed of the biasing information of which of them is in the more or less privileged positions?
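Here is a minimal sketch of the decision-theoretic reasoning behind those numbers, assuming that behind the veil each of the six people is equally likely to occupy any of the six positions:

\[
P(\text{you survive} \mid \text{push}) = \tfrac{5}{6},
\qquad
P(\text{you survive} \mid \text{do not push}) = \tfrac{1}{6}.
\]

Since each person's expected well-being (here, simply their chance of survival) is higher if the push occurs, that is the option each would rationally agree to in advance: five expected survivors rather than one.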
By violating what would be unanimously agreed upon behind the veil of ignorance, non-utilitarian views implicitly act as reactionary forces, protecting the privileges of the status quo against the greater needs of those in less safe or fortunate positions. If the one were already on the track, few would think it okay (let alone required) to lift him to safety and thereby cause the deaths of the other five. This shows that the alleged counterexample depends upon status quo bias. If you reject the idea that the default state of the world is morally privileged, you should likewise reject the distinction between "killing" and "letting die" that this counterexample relies upon. Whether we should prefer the outcome in which the trolley hits five people, or just hits the one, should not depend upon which we think of as being the "default". And if the choice is between consistently preferring more deaths or fewer, the moral answer is surely clear.
But again, that's just to talk about what matters in principle. We should ultimately want what's overall best for everyone. You can imagine weird hypotheticals where this yields verdicts of a sort that you wouldn't want people to act upon in real life. But the world is not a trolley problem. In practice, utilitarians agree, the best way to achieve moral goals is to respect people's rights. That doesn't require building rights into the very goal to be achieved, however. Rights are just a means – though a robustly useful one, not to be neglected – for averting harms and securing better outcomes.
[Demandingness]
We saw that status-quo privilege is implicit in non-consequentialist explanations of why killing is wrong. Privilege also shapes the other main objection to utilitarianism, namely, that it is too demanding. Consider: those who are wealthy by global standards could do a lot of good by transferring much of their wealth to the global poor. GiveWell estimates that their top charities can save a life for under $5,000, which is extraordinary. To put this number in perspective: Americans spend over $250 billion each year on alcohol. Utilitarianism plainly implies that it would be morally better for us to spend less on ourselves, and more to help those in need. This can be uncomfortable to hear, but it also seems hard to deny. Most moral views will agree that it would be better to do more to help others. (It's surely what you would choose from behind a global veil of ignorance.)
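To spell out the scale of that comparison, here is a back-of-the-envelope calculation using just the two figures above. (Cost-effectiveness would of course decline well before giving reached anything like this scale, so treat it as an illustration of orders of magnitude, not a forecast.)

\[
\frac{\$250{,}000{,}000{,}000 \text{ per year}}{\$5{,}000 \text{ per life saved}}
= 50{,}000{,}000 \text{ lives' worth of spending per year.}
\]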
Utilitarianism is sometimes represented as claiming that we ought to maximize well-being. But this is to use "ought" in an ideal sense, as picking out which action would be best. It has nothing to do with the ordinary notion of obligation, according to which falling short renders you liable for blame, guilt, and other negative reactions. Doing more good is always better. But that's not to say that anything short of moral perfection is categorically bad, as opposed to simply less than perfectly good. So it's misleading to claim that utilitarianism demands moral perfection. It simply recognizes that better is better, as is surely undeniable.
[Conclusion]
We began with the problem that prohibition-based moralities are insufficient to guide us towards a better future. We should, of course, respect others' rights, as doing otherwise would almost certainly result in more harm than good. But we can't stop there. We should also look for positive opportunities to make the world a better place. We need to think carefully about what is truly important, how to protect it, and how to safely promote it.
Utilitarianism is one moral theory that might help guide us here. It claims, plausibly enough, that what ultimately matters is the well-being of sentient creatures like ourselves. It warns us not to disregard the interests of those who are distant or different from ourselves. And it reminds us that it's better to do more good than less.
Philosophers have focused a lot of attention on objections to utilitarianism: for example, whether it offers an adequate explanation of the wrongness of killing, and whether it is excessively demanding. And we've seen how utilitarians can respond to these objections. (You can learn much more on utilitarianism.net.) But to focus exclusively on these debates risks neglecting the most important insight of utilitarian moral theory, which is that avoiding wrongness isn't what ultimately matters. After all, you could avoid wrongness by simply not existing at all. But hopefully you aspire to more than that.
What matters, according to utilitarianism, is that sentient beings' lives go well. And yes, this ultimate concern can motivate us to avoid wrongdoing – as wrong actions risk making the world much worse. But that's just a small part of the overall picture. For apt moral goals may also motivate us to positively make the world better. And that's important too!
Our lives are filled with moral opportunity, not just peril. We need a moral theory that reflects this fact. There's more to ethical life than just berating each other. If we can refocus our moral attention on the question of what's truly important, we may be better positioned to work together, and achieve great things, when the opportunity arises.