The Repugnant Conclusion Isn’t
> There is nothing bad in each of these lives; but there is little happiness, and little else that is good. The people in Z never suffer; but all they have is muzak and potatoes.

- Derek Parfit, "Overpopulation and the Quality of Life"
The image of World Z provokes an unsettling cognitive dissonance. It forces us to confront the possibility that any degree of happiness, no matter how magnificent, can be outweighed by arbitrarily small pleasures multiplied across a sufficiently large population. Imagining this kind of mediocrity, we can hardly endorse it over a small yet ecstatic utopia.
And yet, I feel strongly that this perceived tension is due entirely to a failure of the imagination. When Parfit says “muzak and potatoes”, perhaps you conjure up the image of a medieval European peasant, covered in mud, living in squalor, only just barely getting by.
But read again more carefully: “There is nothing bad in each of these lives”.
Although it sounds mundane, I contend that this is nearly incomprehensible. Can you actually imagine what it would be like to never have anything bad happen to you? We don’t describe such a life as mediocre; we describe it as “charmed” or “overwhelmingly privileged”.
After all, each of our lives is absolutely filled with bad things. Some of these are obvious (injury, illness, the loss of a loved one), but mostly they just exist as a kind of dull background pain we’ve grown to accept. The bad things are, as Simone Weil put it, the “countless horrors which lie beyond tears”.
In stark contrast, take Parfit’s vision of World Z both seriously and literally.
These are lives with no pain, no loneliness or depression, no loss or fear, no anxiety, no aging, no disease, no decay. Never a single moment of sorrow. These are lives free entirely from every minor ache and cramp, from desire, from jealousy, from greed, and from every other sin that poisons the heart. Free from the million ills that plague and poke at ordinary people.
It is thus less the world of peasants and closer to that of a subdued paradise. The closest analog we can imagine is perhaps a Buddhist sanctuary, each member so permanently, universally, and profoundly enlightened that they no longer experience suffering of any kind.
And that’s not all! Parfit further tells us that their lives are net positive. And so in addition to never experiencing any unpleasantness of any degree, they also experience simple pleasures. A “little happiness”, small nearly to the point of nothingness, yet enough to tip the scales. Perhaps the warmth of basking under a beam of sun, the gentle nourishment of simple meals, or just the low-level background satisfaction of a slow Sunday morning.
Properly construed, that is the world Parfit would have us imagine. Not a mediocre world of “muzak and potatoes”, but a kind of tranquil nirvana beyond pain. And that is a world I have no problem endorsing.
This is how Parfit formulated the Repugnant Conclusion, but as it’s usually invoked in population-ethics discussions about the (de)merits of total symmetric utilitarianism, it need not be the case that the muzak-and-potatoes lives never suffer.
The real RC that some kinds of total views face is that world A, with lives of much more happiness than suffering, is worse than world Z, with more lives of just barely more happiness than suffering. How repugnant this is depends, for some people like myself, on how much happiness or suffering is in those lives on each side. I wrote about this here and here.
I broadly agree that “what does a life barely worth living look like” matters a lot, and you could imagine setting the bar high enough that the repugnant conclusion doesn’t look repugnant.
That being said, if you set it too high, there are other counterintuitive conclusions. For example, if you set it higher than people alive today (as it sounds like you’re doing), then you are saying that people alive today have negative terminal value, and (if we ignore instrumental value) it would be better if they didn’t exist.
This seems entirely plausible to me. A couple of jokes that may help generate an intuition here (1, 2).
You could argue that suicide rates would be much higher if this were true, but there are lots of reasons people might not commit suicide despite experiencing net-negative utility over the course of their lives.
At the very least, this doesn’t feel as obviously objectionable to me as the other proposed solutions to the “mere addition paradox”.
Yeah.
Here’s an analogy:
Someone goes on a hike with others. They’re cold and their feet hurt. They decide to continue the hike to the destination instead of turning around 30% in. They’re not saying their conscious experience during the hike is positive, but they prefer to continue because the hike isn’t (just) about their moment-to-moment experience!
Likewise, with life, many people want to continue their lives for reasons other than “isn’t it wonderful what I experience moment-to-moment?” For instance, we have things we’re curious about, things to look forward to, projects to finish, bucket list items to tick off.
Of course, some people really do live for positive experiences (and are happy on net) – that’s perfectly fine.
Gotta love utilitarian moral philosophers, who will do things like declare that their lives are net-negative and all people ought rationally to commit suicide, because this “doesn’t feel as obviously objectionable to me as the other proposed solutions”! :P
(To be extra clear—I am joking, I do in fact love moral philosophers despite their quirks, I don’t think that the answers to these odd population-ethics questions are obvious even though like everyone I have my opinions and gut reactions.)
I think the main arguments against suicide are that it causes your loved ones a lot of harm, and (for some people) there is a lot of uncertainty in the future. Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really the remaining years of your life) is net-negative, rather than commit suicide you should increase variance because you can only stand to benefit.
Yeah, I don’t think it’s clearly unreasonable (though it’s not my intuition).
I agree that suicide rates are not particularly strong evidence one way or the other.
Great point. But note that if lives of monk-like tranquility are neutral, that makes the Mirrored Repugnant Conclusion harder to accept:
> The total view in population ethics implies this Mirrored Repugnant Conclusion.
If lives of monk-like tranquility are neutral, then lives of monk-like tranquility plus a mosquito bite are barely bad, and so the total view implies that enough of those barely bad lives are together worse than a small number of truly terrible lives.
Yeah, it’s difficult to intuit, but I think that’s pretty clearly because we’re bad at imagining the aggregate harm of billions (or trillions) of mosquito bites. One way to reason around this is to think (toy numbers follow the list):
- I would rather get punched once in the arm than once in the gut, but I would rather get punched once in the gut than 10x in the arm
- I’m fine with disaggregating, and saying that I would prefer a world where 1 person gets punched in the gut (PiG) to a world where 10 people get punched in the arm (PiA)
- I’m also fine with multiplying those numbers by 10 and saying that I would prefer 10 people PiG to 100 people PiA
- It’s harder to intuit this for really really big numbers, but I am happy to attribute that to a failure of my imagination, rather than some bizarre effect where total utilitarianism only holds for small populations
- I’m also fine intensifying the first harm by a little bit so long as the populations are offset (e.g. I would prefer 1 person punched in the face to 1000 people punched in the arm)
- Again, it’s hard to continue to intuit this for really extreme harms and really large populations, but I am more willing to attribute that to cognitive failures and biases than to a bizarre ethical rule
Etc etc.
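To put toy numbers on the first few steps (the welfare units are mine, purely illustrative): say a punch in the arm costs 1 unit of welfare and a punch in the gut costs 4. Total utilitarianism then ranks

$$U(1 \times \text{gut}) = -4 > -10 = U(10 \times \text{arm}), \qquad U(10 \times \text{gut}) = -40 > -100 = U(100 \times \text{arm}),$$

and multiplying both populations by any common factor preserves the ranking. Nothing in the arithmetic changes at large scales; only our ability to visualize the aggregate does.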
I don’t think you’re forced to say that if a life with x utility is neutral, a life with x − 1 utility is bad. It seems to me that the most plausible version of the OP’s approach would have a very wide neutral band.
Yes, nice point. We could depart from the total view and go for a neutral band. But it’s worth noting that this move comes with problems of its own.
The way you describe it, it sounds downright appealing!
You could even argue that the quality of life in such an instantiation of the repugnant conclusion is, as far as people’s experiences are concerned, no worse than the quality of life in a eudaimonic paradise. Not everyone will agree with this intuition, but I’d say that the more intense pleasures seem appealing to us because they connect to life goals or to addictive tendencies and thrill seeking, which we’d no longer bother with if all our life goals were taken care of and we were permanently in a calm and restful state where everything feels fine.
That said, I don’t think evaluating lives purely based on the experiences they contain is of much use in most contexts. For instance, many people wouldn’t want to enter solipsistic experience machines (whether they’re built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.
To elaborate more on the theme of “contentment as equally good as intense pleasure,” here are some excerpts from my article on tranquilism:
Edit: I decided that endnote 8 is worth quoting here, too:
Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.
Re:
> For instance, many people wouldn’t want to enter solipsistic experience machines (whether they’re built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.
I just don’t trust this intuition very much. I think there is a lot of anxiety around experience machines due to:
- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what’s real
And to be clear, I share the intuition that experience machines seem bad, and yet I’m often totally content to play video games all day long because doing so doesn’t violate those two conditions.
So what I’m roughly arguing is: we have some good reasons to be wary of experience machines, but I don’t think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
I agree that some people don’t seem to give hedonism a fair hearing when discussing experience machine thought experiments. But also, I think that some people have genuine reservations that make sense given their life goals.
Personally, I very much see the appeal of experience machines. Under the right circumstances, I’d be thrilled to enter! If I were single and my effective altruist goals were taken care of, I would leave my friends and family behind for a solipsistic experience machine. (I think I do care about having authentic relationships with friends and family to some degree, but definitely not enough!) I’d also enter a non-solipsistic experience machine if my girlfriend wanted to join and we’d continue to have authentic interactions (even if that opens up the possibility of having negative experiences). The reason I wouldn’t want to enter under default circumstances is that the machine would replace the person I love with a virtual person (this holds even if my girlfriend got her own experience machine, and everyone else on the planet too for that matter). I know I wouldn’t necessarily be aware of the difference and that things with a virtual girlfriend (or girlfriends?) could be incredibly good. Still, entering this solipsistic experience machine would go against the idea of loving someone for the person they are (instead of how they make me feel).
I wrote more experience machine thought experiments here.
I don’t think there’s such a thing as “the ethical value of a life,” at least not in a well-defined objective sense. (There are clearly instances where people’s lives aren’t worth living and instances where it would be a tragedy to end someone’s life against their will, so when I say the concept “isn’t objective,” I’m not saying that there’s nothing we can say about the matter. I just mean that it’s defensible for different people to emphasize different aspects of “the value of a life.” [Especially when we’re considering different contexts such as the value of an existing or sure-to-exist person vs. the value of newly creating a person that is merely a possible person at the time we face the decision.])
My understanding is that the people in the repugnant conclusion have lives that are slightly net positive which means those lives could have suffering as long as the good slightly outweighs the suffering. For example, a life could have slightly net positive value if it had 11 positive experiences for every 10 negative experiences.
This just seems way better than a neutral life.
If you’re going to say a neutral life has no bad things, then fine, but for me it would also then have to have no good things. So I don’t think a Buddhist monk has a neutral life, as they do experience lots of bliss.
A neutral life would be more like someone in a life-long coma. Such a life would have no good or bad experiences. Or a life in which the good equals the bad. I say more about what I think a neutral life is here.
This is the standard objection that the line we draw between a life worth living and one not worth living is actually high enough that the RC is not repugnant. I think this is plausible, but it means accepting that many people today have lives not worth living, which could be considered quite harsh: many of the people alive in poverty today believe their own lives are worth living. I can’t see how holding this argument is not the same as holding that one knows better than they do about the value of their own lives.
[TL;DR: I guess taking uncertainty seriously takes the sting out of population ethics. And it’s usually a good heuristic for counterintuitive quasi-fanatical conclusions.]
Thanks for the post. But I don’t think it works. Your point is that there should be no controversy over the RC, because all it implies is that zillions of people living in a tranquil nirvana is better than 10 billion (or n) people living in blissful paradise...? It looks like you basically just set a higher bar for what counts as positive value. One could argue along the same lines that 2 people in bliss is worse than 1k living in nirvana.
The problem is:
Parfit called it “repugnant” for a reason, and he tried to find arguments against it. I don’t see the point of making it kind of trivial… Actually, the philosophically interesting point is to explain why it is controversial. I suspect easy answers tend to be wrong here.
I don’t live in a nirvana-like state, but I still think my life has positive value. I can totally reckon there are lives of negative value, and I can imagine my life becoming one. But it’s interesting to acknowledge that other people think about this differently.
I invite you to take uncertainty seriously. Some argue that, because of this, the RC doesn’t have “any probative force”, particularly if some comparisons are highly imprecise or impossible.
My personal take so far is that the RC is right: there is a *logically possible* world w with a huge population m of lives barely worth living which is better than a world with n people living in ultimate bliss. For instance, I’m pretty confident that 10 billion living in nirvana is better than 2 people living in bliss, just like 10^18 people in nirvana is probably better than 10 billion living in bliss.
But as the comparison between the corresponding worlds gets harder, for a determinate world w′, it might become more unlikely that you can prove that U(w) > U(w′) – i.e., that you can justifiably believe that such-and-such w and w′ are those worlds. Uncertainty increases with each step in the chain of your argument.
Plus, I think that a good (and old) heuristic for consequentialists to avoid counterintuitive solutions is to consider uncertainty and check whether the conclusion remains sound – especially if you have something like “inductive steps”. E.g.: sure, a world w where 1 person is tortured for 50 years so that n others may live in bliss is better than our current world… but how certain can you be that such-and-such precise world is w?
I have some moral uncertainty regarding totalism vs. averageism, and have at times vacillated between the two. However, one of the points that has led me to increasingly favor totalism is a rebuttal to the veil-of-ignorance argument in favor of averageism (i.e., “wouldn’t you prefer to be a random person in [averageist paradise] vs. [totalist paradise]?”).
The rebuttal is partially just that averageism arbitrarily ignores people that don’t exist in its calculations. If you can get an averageist to agree that such non-people should be considered, then it becomes mathematically obvious that the repugnant conclusion is at least less repugnant than the alternative, which is “some number of people exist and live happy lives, but some orders-of-magnitude-larger number of people just have completely net-zero existences (whereas they could have had slightly net-positive lives), completely swamping the average.” In other words, whenever a supposed averageist paradise is described, you should basically just add in what I call “grey people”: a number of purely net-zero-experience people who would have existed in the totalist alternative world.
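To make this concrete with purely illustrative numbers of my own: suppose the averageist paradise A has 10 people at welfare 100, while the totalist alternative Z has those same 10 plus a million more people, all at welfare 1. Adding A’s million “grey people” at welfare 0 into its average gives

$$\bar{U}_A = \frac{10 \cdot 100 + 10^6 \cdot 0}{10 + 10^6} \approx 0.001, \qquad \bar{U}_Z = \frac{(10 + 10^6) \cdot 1}{10 + 10^6} = 1,$$

so the “repugnant” world Z comes out ahead even on the averageist’s own metric once the grey people are counted.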
Thus, this seems to largely defeat the “repugnant conclusion” objection: you can’t call it more repugnant than the alternative even according to the average wellbeing.
Of course, getting averageists to accept “non-people should be considered” seems to be the far trickier part. I was already somewhat open to this idea, so I was probably easier to persuade than most people. However, some of the points here that persuaded me were:
1. We are naturally biased in favor of caring about moral ideas which validate/benefit us, and we can only examine this question if we already exist. Thus, our intuitions and feelings will probably be biased in favor of frameworks that only care about existing people.
2. Why shouldn’t we care about potential people? (This is especially important to pair with the previous point.) It seems like the onus should be on averageists, since totalists can at least point out “you would prefer to be a person who exists with a net-positive life to having no existence at all.” (Admittedly, there may be a flaw/circularity in this assumption, but circularities are sometimes hard to avoid in ethics (and descriptive logic), and I’m not confident it is actually problematically circular: is it really unreasonable to assume that a rational person, presented with a “never exist” button and a “live millions of extremely happy years with no suffering” button would prefer the latter? If not, there doesn’t seem to be a reason to think that changes at any point before the experience in the latter option diminishes to “live a pure net-zero-happiness life”.) One rebuttal to this which I think I often returned to (perhaps subconsciously or indirectly) when I defended averageism was “but the repugnant conclusion!”—but, as I describe above, that does not seem to be a problem if you accept the idea that potential people matter.
3. It seems that averageism has its own extreme ends: according to it, a world with only 1 person suffering a torturous life is worse than a world with billions of people suffering slightly-less-torturous lives on average. Of course, if one doggedly insists that averages are all that matter, then this is all consistent. But it seems hard to reject the intuition that there is some moral relevance in the aggregate experience vs. the average experience (see the toy numbers below).
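To put toy numbers on the third point (mine, for illustration): a world with one person at welfare −100 averages −100, while a world with a billion people at −99 averages −99, so averageism prefers the second even though it contains roughly a billion times the aggregate suffering:

$$\bar{U}_1 = -100 < -99 = \bar{U}_2, \qquad U_1^{\text{total}} = -100 \gg -9.9 \times 10^{10} = U_2^{\text{total}}.$$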
Interesting post, thanks!
As far as I can tell, it is easy to understand why the Repugnant Conclusion is repugnant. It exploits scope neglect, a very well-known cognitive bias, to a great extent.
I agree that scope neglect probably explains a bit here, but I don’t think that’s all. For instance, nobody would deny that 2 people living in bliss is a worse state than 1 billion in nirvana. And even population ethicists feel uncomfortable with the RC.
Besides my mention of uncertainty above, I also guess (very tentatively) that some other factors might mess with our intuitions:
i) scarcity concerns: we evolved under resource scarcity, which biases us towards lower populations;
ii) social norms might bias us against RC;
iii) contractualist reasoning: if you were in something like an Original Position and had to choose which world you would prefer to live in, you’d pick the “low pop living in bliss” world, of course (from a selfish POV, at least). Similarly, we might say that living in a country with a high avg HDI (e.g., Sweden) is better than living in one with a lower avg HDI (e.g., Nigeria), because avg HDI here is a good predictor of how good the life of an individual living in that country is. But I guess this just shows contractualist reasoning is unsuited for population ethics: we are interested in how good a place is all things considered, not in “how good is this place for those who live here”.
P.S.: Something that puzzles me is that RC seems to be analogous to the problem of fanaticism in Pascalian scenarios—and yet I don’t see this analogy being widely explored.
Adherents of total hedonic utilitarianism (e.g. me) would support this, as long as bliss is sufficiently better than nirvana. I find it easy to see (if not feel) that 10^9 is much larger than 2, but “bliss” and “nirvana” look similar because they are somewhat vague.
I agree the factors you mention also explain why the RC is often seen as repugnant. That being said, would you say they are evidence against the total view?
I think it is a little bit different. If someone offers me 100 T$ (similar to global GDP) in return for 1 $, I am happy to reject the offer because the variance of my prior is much lower than that of the offer. In the RC, which is just a thought experiment, I think we are supposed to consider that both scenarios (e.g. 2 people in bliss, and 1 billion in nirvana) are certain (i.e. have zero variance), so Bayesian considerations should arguably not play a role.
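To spell out one toy version of this (my own formalization, using expected value rather than variance): if p is my credence that an offer of 10^14 $ would actually be honored in exchange for my 1 $, then

$$\mathbb{E}[\text{accept}] = p \cdot 10^{14}\,\$ - 1\,\$ < 0 \quad \text{whenever } p < 10^{-14},$$

so rejecting the offer is easy. The RC comparison, by stipulation, carries no such probabilistic discount.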
Thanks for this reply, my friend. My remarks:
1. Though “bliss” and (tranquil) “nirvana” are vague, and some people might equate them, in the text under discussion they are made a bit more precise… But I guess we both agree that this is still far from making them fully precise terms, especially because we are not very good at measuring welfare. I consider this evidence that uncertainty plays a role in our “repugnant” intuitions.
2. Allow me to be brief and tentative: I don’t consider my remarks evidence against the total view. I suspect the total view is the right theory of value / the good (though I distinguish this from a theory of justice / duty, which is another way of answering “what should we do?”). I think that RC reasoning is probably correct, but it is hard to apply in uncertain comparisons, or might be quite trivial (and so not very “repugnant”) in certain ones.
3. I agree that RC reasoning does not involve uncertainty / risk / probabilities. But I find its premises and steps quite reminiscent of some “low probability, high expected value” cases, so I suspect the formal arguments are related (besides the fact that both conclusions seem to be entailed by expected utility theory). When I do have the time to engage with the literature, I’ll start with Nebel’s Intrapersonal Addition Paradox and Kosonen’s solution.
Thanks for clarifying!