tl;dr: Total utilitarianism treats saving lives and creating new lives as equivalent (all else equal). This seems wrong: funding fertility is not an adequate substitute for bednets. We can avoid this result by giving separate weight to both person-directed and undirected (or "impersonal") reasons. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual. This commonsense alternative to totalism still entails longtermism, as zillions of weak impersonal reasons to bring new lives into existence can add up to overwhelmingly strong reasons to prevent human extinction.
Killing vs Failing to Create
I think the strongest objection to total utilitarianism is that it risks collapsing the theoretical distinction between killing and failing to create. (Of course, there would still be good practical reasons to maintain such a distinction in practice; but I think there's a principled distinction here that our theories ought to accommodate.) While I think it's straightforwardly good to bring more awesome lives into existence, and so failing to create an awesome life constitutes a missed opportunity for doing good, premature death is not just a "missed opportunity" for a good future, it's harmful in a way that should especially concern us.
For example, we clearly have much stronger moral reasons to save the life of a young child (e.g. by funding anti-malarial bednets) than to simply cause an extra child to exist (e.g. by funding fertility treatments or incentivizing procreation). If totalism can't accommodate this moral datum, that would seem a serious problem for the view.
How can we best accommodate this datum? I think there may be two distinct intuitions in the vicinity that I'd want to accommodate:
(1) Something about the intrinsic badness of (undesired) death.
(2) Counting both person-directed and undirected ("impersonal") moral reasons.
The Intrinsic Harm of Death
Most of the harm of death is comparative: not bad in itself, but worse than the alternative of living on. Importantly, we only have reason to avoid comparative harms in ways that secure the better alternative. To see this, suppose that if you save a child's life, they'll live two more decades and then die from an illness that robs them of five decades more life. That latter death is then really bad for them. Does it follow that you shouldn't save the child's life after all (since it exposes them to a more harmful death later)? Of course not. The later death is worse compared to living the five decades extra, but letting them die now would do them even less good, no matter that the early death, in depriving them of just two decades of life, is not "as bad" (comparatively speaking) as the later death would be (in a different context with a different point of comparison).
So we should not aim to minimize comparative harms of this sort: that would lead us badly astray. But it's a tricky question whether the harm of death is purely comparative. In "Value Receptacles" (2015, p. 323), I argued that it plausibly is not:
Besides preventing the creation of future goods, death is also positively disvaluable insofar as it involves the interruption and thwarting of important life plans, projects, and goals. If such thwarting has sufficient disvalue, it could well outweigh the slight increase in hedonic value obtained in the replacement scenario [where one person is "struck down in the prime of life and replaced with a marginally happier substitute"].
Thwarted goals and projects may make death positively bad to some extent. But the extent must be limited. However tragic it is to die in one's teens (say), I don't think one could plausibly say that it's so bad as to render the person's life overall not worth living. The goods of life can fairly easily outweigh the harm of death, I believe.
It's a tricky question where exactly to draw the line here. Suppose a couple undergoing fertility treatment learns that all of their potential embryos have a genetic defect that would inevitably result in painless death while the child is still very young. That obviously gives the parents strong prudential reasons to refrain from procreating and suffering the immense grief that would soon follow. But if we bracket others' interests, and focus purely on the interests of the potential child themselves: is it ever the case that an overall-happy life, however short, is not worth living, purely due to the fact of death? I could, of course, imagine a painful death outweighing the happiness of a very short life. But suppose the death is painless, or at any rate is nowhere near to outweighing the prior joy the life contains. Yet it does thwart the child's plans and projects. Is that so bad that it would have been better for them to never exist at all? I find that hard to believe.
For another test: imagine a future society that uses artificial wombs to procreate (and parents aren't notified until the entire process is successfully completed). Suppose some fetuses have a congenital condition that causes them to painlessly die almost immediately after first acquiring sentience (or whatever is required for morally relevant interests of a sort that makes death harmful for them). How much should the society be willing to invest in diagnostic testing to instead allow the defective embryos to be aborted prior to acquiring moral interests? (Or, at greater cost, to test the gametes prior to fertilization?) Could preventing short but painless existence ever take priority over other societal goals like saving lives and reducing suffering?
We probably can't give that much weight to the intrinsic harm of (painless) death, if it's never enough to make non-existence look especially desirable in comparison. So I think we may need to look elsewhere to find stronger reasons.
Person-Directed and Undirected Reasons
Much population ethics discourse sets up a false dichotomy between the two extremes of impersonal total utilitarianism and narrow person-affecting views on which we've no reason to bring happy lives into existence. I find this very strange, since a far more intuitive middle-ground view would acknowledge that we have both person-directed and undirected (or "impersonal") reasons.
Failing to create a person does not harm or wrong that individual in the way that negatively affecting their interests (e.g. by killing them as a young adult) does. Contraception isn't murder, and neither is abstinence. Person-directed reasons explain this common-sense distinction: we have especially strong reasons not to harm or wrong particular individuals.
But avoiding wrongs isn't all that matters. There's always some (albeit weaker) reason to positively benefit possible future people by bringing them into a positive existence, even though it doesn't wrong anyone to remain childless by choice.
And when you multiply those individually weak reasons by zillions, you can end up with overwhelmingly strong reasons to prevent human extinction, just as longtermists claim. (This reason is so strong that it would plausibly be wrong to neglect or violate it, even though doing so does not wrong any particular individual, just as the non-identity problem shows that one outcome can be worse than another without necessarily being worse for any particular individual.)
On this hybrid view, which I defend in more detail in "Rethinking the Asymmetry" (2017), we are warranted in some degree of partiality towards the antecedently actual. We have weak impersonal reasons to bring an extra life into existence, while we have both impersonal and person-directed reasons to aid an existing individual (or a future individual who is certain to exist independently of our present decision).
Conclusion
I think this hybrid view is very commonsensical. We all agree that you can harm someone by bringing them into a miserable existence, so there's no basis for denying that you can benefit someone by bringing them into a happy existence. It would be crazy to claim that there is literally no reason to do the latter. And there is no theoretical advantage to making this crazy claim. (As I explain in "Puzzles for Everyone", it doesn't solve the repugnant conclusion, because we need a solution that works for the intra-personal case, and whatever does the trick there will automatically carry over to the interpersonal version too.) So the narrow person-affecting view really does strike me as entirely unmotivated.
But, as indicated above, this very natural hybrid view still entails the basic longtermist claim that we've very strong moral reasons to care about the distant future (and strongly prefer flourishing civilization over extinction). So the notion that longtermism depends on stark totalism is simply a mistake.
Totalism is on the right track when it comes to many big-picture questions, but it is an oversimplification. Failing to create is importantly morally different from killing. We have especially stringent reasons to avoid the latter. (It's an interesting further question precisely how much extra weight we should give to saving lives over creating lives.) But we still have some moral reason to want good lives to come into existence, and that adds up to very strong moral reasons to care about the future of humanity in its entirety.
You argue that the value added by saving a life is separable into two categories:
Person-directed: The value added by positively affecting an existing person's interests.
Undirected: The value added simply by increasing the total amount of happy life lived.
Let's define the "coefficient of undirected value" C, between 0 and 1, to be the proportion of value added for undirected reasons, as opposed to person-directed reasons. The totalist view would set C=1, arguing that there is no intrinsic value to helping a particular person. The person-affecting view would set C=0, arguing that it is only coherent to add value when it positively affects an existing person. You argue that this is a false dichotomy, and that C should be "low," i.e. giving low moral weight to interventions which only produce undirected value (e.g. increasing fertility) relative to interventions which produce both categories of value (e.g. saving a life).
I think the totalist view should be lent more credence than you lend it in your post, and that C should accordingly be adjusted upwards by moral uncertainty to be "high." I would endorse the implication that causing a baby to be born who otherwise would not is "close to as good" as saving a life.
Consider choosing between the following situations (in the vein of your post's discussion of the intrinsic harm of death):
Case 1: A woman wants a child. You use your instant artificial womb to create an infant for her.
Case 2: A woman just gave birth to an infant. The infant is about to painlessly die due to a genetic defect. You prevent that death.
For the sake of argument, let's assume that the women's interests are identical in both cases. (i.e. the sadness Woman 1 would have had if you didn't make her a child is the same as the sadness Woman 2 would have had if her child painlessly died, and the happiness of both women from being able to raise their child is the same.)
To me, it seems most intuitive that one should have little to no preference between Case 1 and Case 2. The outcomes for both the woman and the child are (by construction) identical. Of course, the value added in Case 1 is undirected, since the child doesn't yet exist for its interests to be positively affected by your decision, and the value added in Case 2 includes both directed and undirected components. If we follow this intuition, we must conclude that C=1, or C is very close to 1. Even if you still have a significant intuitive preference for Case 2, let's say you're choosing between two occurrences of Case 1 and one occurrence of Case 2. Many now would switch to prefer the two occurrences of Case 1, since now we have two happy mothers and children versus one. However, this still implies C>0.5. If we accept the idea that Case 1 is close to as good as Case 2, then it seems hard to escape the conclusion that C is "high," and we should adjust the way we think about increasing fertility accordingly.
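To make that last inference explicit (a minimal formalization of my own; the symbols $V_d$, $V_u$, and $V$ are illustrative labels, not from the post): let $V_d$ and $V_u$ be the person-directed and undirected components of the value of saving a life, so $V = V_d + V_u$ and $C = V_u / V$. Then:

$$\text{value}(\text{Case 2}) = V_d + V_u = V, \qquad \text{value}(\text{Case 1}) = V_u = C\,V,$$

$$2 \cdot \text{value}(\text{Case 1}) > \text{value}(\text{Case 2}) \iff 2CV > V \iff C > 0.5.$$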
Let me know what you think!
Interesting! Thanks for this.
I should clarify that I'm not committed to C being "low"; but I do think it should be somewhere between 0 and 1 (rather than at either extreme). I don't have any objection to C>0.5, for example, though I expect many people would find it more intuitive to place it somewhat lower. I'd probably be most comfortable somewhere around 0.5 myself, but I have very wide uncertainty on this, and could probably easily be swayed anywhere in the range from 0.2 to 0.8 or so. It seems a really hard question!
My thought is that what we (should) care about may vary between the cases, and change over time (as new people come into existence). Roughly, the intuition is that we should care especially about individuals who do or will exist (independently of our present actions). So once a child exists (or will exist), we may have just as much reason to be thankful for their creation as we do for their life being saved; so I agree the two options don't differ in retrospect. But in prospect, we have (somewhat) less reason to bring a new person into existence than to save an already-existing person. And I take the "in prospect" perspective to be the one that's more decision-relevant.
Thanks for the clarification, and for your explanation of your thought process!
Hi Richard,
Thanks for sharing your thoughts.
It is quite unclear to me whether total utilitarianism treats saving lives and creating new lives as equivalent, because all else seems far from equal. For example:
Saving a life prevents the suffering associated with death, both for the person whose life was saved and for many people besides.
Saving a life prevents resources from being wasted.
The net effects of saving a life and of creating a life on population size may well differ, as lives are saved at some positive age, but are created at age 0.
If the goal is testing total utilitarianism, I believe we should search for situations in which total utility is constant, but we still think one of them is better than the other. I do not think this can be achieved with real-world examples, as reality is too complex. So I think it is better to consider thought experiments. For instance:
1. 100 people live for 100 years with an annual utility per capita of 10.
2. 100 people live for 50 years with an annual utility per capita of 10, and thanks to a life-saving intervention are able to live for 50 more years with an annual utility per capita of 10.
3. 100 people live for 50 years with an annual utility per capita of 10, and then 100 lives are created and live for 50 years with an annual utility per capita of 10.
All situations have a total utility of 100 k. The 1st involves neither saving nor creating lives, the 2nd involves saving lives, and the 3rd creating lives. However, I would say they are morally identical.
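For concreteness, here is a quick arithmetic check of the three totals (a minimal sketch of my own; the variable names are illustrative, not from the comment):

```python
# Check that the three scenarios have equal total utility (100 k), using the figures above.
PEOPLE = 100
ANNUAL_UTILITY = 10  # annual utility per capita

scenario_1 = PEOPLE * 100 * ANNUAL_UTILITY                                # 100 people live 100 years
scenario_2 = PEOPLE * 50 * ANNUAL_UTILITY + PEOPLE * 50 * ANNUAL_UTILITY  # live 50 years, are saved, live 50 more
scenario_3 = PEOPLE * 50 * ANNUAL_UTILITY + PEOPLE * 50 * ANNUAL_UTILITY  # live 50 years, then 100 new lives for 50 years

assert scenario_1 == scenario_2 == scenario_3 == 100_000  # 100 k, as stated
```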
I may be missing something. Thoughts are welcome!
You may need to imagine yourself inside the world in order for person-directed reasons to get a grip. Suppose that you're at year 49, and can choose whether to realize the 2nd or 3rd outcome next year. That is, you can save everyone (for 50 more years), or you can let everyone die while setting in motion a "replacement" process (to create new people with 50 years of life). It seems to me that there's some moral reason to prefer the former option!
Thanks for replying!
I can see why my intuitions would point to the 2nd option being better, but this can be explained by them not internalising the conditions of the thought experiment.
If I am at the end of year 49, and can choose whether to realise the 2nd or 3rd outcome after the end of year 50, I would intuitively think that:
Years 51 to 100 would have the same utility (50 k) for both the 2nd and 3rd options.
Year 50 would have less utility for the 3rd option than for the 2nd. This is because everyone would die at the end of year 50 in the 3rd option, and dying sounds intuitively bad.
However, the 2nd point violates the condition of equal utility. To maintain the conditions of the thought experiment, we would have to assume year 50 to contain the same utility in both options. In other words, the annual utility per capita of 10 would have to be realised every year, instead of simply being the annual mean over the 1st 50 years. To better internalise these conditions, we can say everyone would instantaneously stop being alive (instead of dying) at the end of year 50 in the 2nd option. In this case, both options seem morally identical to me.
Thinking about it, one life could be described as a sequence of moments where one instantaneously stops being alive and is then created.
I do think a key issue here is whether or not that's the right way to think about it. As I wrote in the "Personal Identity" chapter of Parfit's Ethics (2021):
That's an extremely revisionary claim, and not one I think we should accept unless it's unavoidable. But it is entirely avoidable (even on a Parfitian account of personal identity). We can instead think that psychological continuants of existing persons have an importantly different moral status, in prospect, from entirely new (psychologically disconnected) individuals. We may think we have special reasons, grounded in concern for existing individuals, to prefer that their lives continue rather than being replaced, even if this makes no difference to the impersonal value of the world.
That said, if your intuitions differ from mine after considering all these cases, then that's fine. We may have simply reached a bedrock clash of intuitions.
Thanks for sharing.
I get the point, but the analogy is not ideal. To ensure total utility is similar in both situations, I think we should compare:
1. Doing nothing.
2. Killing and reviving someone who is in dreamless sleep.
Killing one person while reviving another may lead to changes in total utility, so it does not work so well as a counterexample in my view.
Being killed and revived while awake would maybe feel strange, which can also change total utility, so an example with dreamless sleep helps.
Ideally, the moments of killing and reviving should be as close in time as possible. The further apart, the more total utility can differ. Dreamless sleep also helps here, because the background stream of thought is more constant. If I were instantly killed while sitting on the sofa at home with some people around me, and then instantly revived a few minutes later, I might find myself surrounded by worried people. This means the total utility may well have changed.
Saying the 2 situations above are similar does not sound revisionary to me (assuming we could ensure with 100 % certainty that the 2nd one would work).
Likewise, and thanks for engaging!
One quick clarification: If someone is later alive, then they have not previously been "killed", as I use the term (i.e. to mean the permanent cessation of life; not just temporary loss of life or whatever). I agree that stopping someone's heartbeat and then starting it again, if no harm is done, is harmless to that individual. What I'm interested in here is whether permanently ending someone's life, and replacing them with an entirely new (psychologically disconnected) life, is something we should regard negatively or with indifference, all else equal.
Ah, sorry, that makes sense. I can also try to give one example where someone dies permanently. For all else to be equal, we can consider 2 situations where only one person is alive at any given time (such that there are no effects on other persons):
World A contains 1 person who lives for 100 years with mean annual utility of 10.
World B contains:
1 person X who lives for 50 years with mean annual utility of 10, and then instantly dies.
1 person Y who is instantly created when person X instantly dies, and then lives for 50 years with mean annual utility of 10.
Both worlds have a total utility of 1 k, and feel equally valuable to me.
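Spelling out the stated totals (my own quick arithmetic check, using only the figures above):

$$U_A = 100 \times 10 = 1000, \qquad U_B = 50 \times 10 + 50 \times 10 = 1000.$$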
I don't understand the positive duty to procreate, which seems to be an accepted premise here?
Morality is an adverb, not an adjective.
Is a room of 100 people 100x more "moral" than a room with 1 person? What's wrong with calling that a morally neutral state? (I'm not totalling up happiness or net pleasure or any of that weird stuff.)
Only when we are forced into a trolley problem, where we face actual decisions (e.g. kill 1 person or 100 people), does the number of people have significance.
I don't think there's a "duty to procreate". I wrote that "There's always some reason to positively benefit possible future people by bringing them into a positive existence, even though it doesn't wrong anyone to remain childless by choice." In other words: it's a good thing to do, not a duty. Some things are important, and worth doing, even though morally optional.
Is a world containing happy lives better than a barren rock? As a staunch anti-nihilist, I think that good lives have value, and so the answer is "yes".
Note that I wouldn't necessarily say that this world is "more moral", since "moral" is more naturally read as a narrow assessment of actions, rather than outcomes. But we should care about more than just actions. The point of acting, as I see it, is to bring about desirable outcomes. And I think we should prefer worlds filled with vibrant, happy lives over barren worlds. That's not something I'm arguing for here; just a basic premise that I think is partly constitutive of having good values.
I think I understand and that makes sense to me.
Hi Richard, this all makes a lot of sense. Gustav Alexandrie and I have a model of "perspective-weighted utilitarianism" which also puts intermediate weight on potential people and has some of the same motivations/implications. I presented it at the June GPI workshop and would be happy to discuss.
-Julian
Sounds great! If/when you have a public draft available, please do share the link!
Richard, interesting post. I think this hybrid approach seems more or less reasonable.
I do think the dichotomy between "person-directed" and "undirected" concerns is a bit artificial, and it glosses over some intermediate cases in ways that over-simplify the population ethics.
Specifically, any given couple considering whether to have children, or whether to allow a particular fetus to reach term (versus aborting it), is not exactly facing a dilemma about a currently existing person with particular traits, but they aren't exactly facing an "undirected" dilemma about whether to add another vague abstract genetic person to the total population either.
Rather, they're facing a decision about whether to create a person who's likely to have certain heritable traits that represent some Mendelian combination of their genes. They're facing a somewhat stochastic distribution of possible traits in the potential offspring, and that complicates the ethics. But when assessing the value of any existing life (e.g. the kid at risk of malaria who might live another 50 years if they survive), we're also facing a somewhat stochastic distribution of possible future traits that might emerge decades in the future.
In other words, pace Derek Parfit, there might be almost as much genetic and psychological continuity between parent and child as between person X at time Y and person X at time Y + many decades. In neither case does the "person-directed" thinking quite capture the malleable nature of human identity within lives and across generations.
Does a positive obligation exist to procreate?
While controversies surround total utilitarianism and the Repugnant Conclusion, what about the ethical implications of sperm donation? Given that it typically entails negligible costs and results in creating content lives in developed nations, could sperm donation be considered a moral duty? Despite concerns about overpopulation and its impact on climate change, could individual actions be akin to a Prisoner's Dilemma, where meaningful change requires large-scale government intervention and individual actions do not matter at all on a large scale?
Regarding meat consumption, when does the act of creating life outweigh the potential for negative consequences, such as dietary choices? If refraining from creating life is justified on the basis of potential meat consumption (as seen in vegan antinatalist perspectives), does it logically follow that it is morally acceptable to kill non-vegans due to their meat consumption?
Finally, you said that saving a life is more important than creating one, though creating one has some relevance. So how many lives created is equal to one life saved? What is the break-even point?
Thanks.
Thanks for this, Richard. Very thoughtful.
However, after being a ~total utilitarian for decades, I've come to realize it is beyond salvage. As I point out in the chapter "Biting the Philosophical Bullet" here.
Take care!