My Skeptical Opinion on the Repugnant Conclusion

Epistemic/Research Status

I’m not very well studied in population ethics. You can view this as a quick opinion piece. I know that similar points to mine have been made before in the literature, but thought it could still be valuable to provide my personal take. I wrote this mainly because I’ve kept hearing the Repugnant Conclusion brought up as an attack on what I see as very basic decision-making, and wanted to better organize my thoughts on the matter. I used Claude to help rewrite this.

Summary

The Repugnant Conclusion has been a topic of frequent discussion and debate within the Effective Altruism (EA) community and beyond, with numerous dedicated posts on the EA Forum. It has often been used to question what I believe to be straightforward, fundamental questions about making trade-offs. However, I argue that the perceived “repugnancy” of the Repugnant Conclusion is often a result of either poorly chosen utility function parameters or misunderstandings about the workings of simple utility functions.

In my view, the undesirability associated with the Repugnant Conclusion is not inherent to the concept itself but rather arises from our intuitive discomfort with certain extreme scenarios.

Instead of dedicating significant efforts to radical approaches designed to circumvent the Repugnant Conclusion, I suggest that the EA and greater intellectual communities should focus on estimating realistic utility functions, leveraging the knowledge and methods found in economics and health science. I have been mostly unimpressed with much of what I’ve read of the debate on this topic within Population Ethics, and much more impressed by discussions on related topics within formal science and engineering.

A Dialogue

Alice: “So, here’s a choice. We could either (A) create 10 humans that are happy millionaires, or (B) create 100 humans that are pretty happy, but not quite as happy as the millionaires.”

Bob: “Weird question, but sure. I’ll take the 100 humans.”

Alice: “That works. Now consider a new option, (C). Instead, we can make 1,000 humans that are fairly happy. Say, these are middle-income, decent lives, but not much fame. Maybe they would rate their lives at an 8.1/10, instead of an 8.5/10.”

Bob: “That still sounds preferable. I’ll go for the 1,000 humans then.”

Alice: “Great. New question. We can actually go all the way to option (D), to create 1 million humans. These will be in mild poverty this time, to preserve resources. Their lives are fairly plain. They would rate their lives an average of 6.2/10. They would still prefer to live, but the margin is pretty slim.”

Bob: “This sounds less great, but based on my intuitions and some math, I think it’s still worthwhile.”

…some time passes…

Bob: “You know, I actually hate this last option. It seems clearly worse than the previous ones.”

Alice: “Okay, do you want to then choose one of the previous ones?”

Bob: “No.”

Alice: “Um… so what do you want to do? Do you want to reconsider the specifics of how you should trade off population quantity vs. quality?”

Bob: “No. My dissatisfaction with my final choice here demonstrates that the entire system of me choosing options is flawed. The very idea that I could make choices has led me to this last choice that I dislike, so I’ve learned that we shouldn’t be allowed to make choices like this.”

Alice: “We do need you to choose some option.”

Bob: “No. Instead, I’ll request that philosophers come up with entirely new ways of comparing options. I think that D is better than C, but I also think that D is worse than C. There must be a flaw in the very idea of deciding between options.”

Alice: “But let’s start again. Do you think that option B is still better than option A?”

Bob: “Yes, I’m still sure.”

Alice: “And you think that option C is still better than option B?”

Bob: “Yes, I’m still sure.”

Alice: “So can you then choose your favorite between C and D?”

Bob: “No. I feel like I’m pressured to choose D, but C seems much better to me. Instead of choosing one or thinking through these trade-offs directly, I think we need a completely new theory of how to make decisions. It’s the very idea of preferring some populations over others in ways like this that’s probably the problem.”

My (Quick) Take:

The Repugnant Conclusion presents a paradox in population ethics that challenges our intuitions about quality vs. quantity of life. As a utilitarian, I think it’s important to address these concerns, but I don’t see any need to discard the framework of welfare trade-offs entirely, as some might suggest.

The Trade-Off Problem

Most people intuitively understand the need for trade-offs. For instance, consider the extreme yet analogous question:

“Would you prefer to save one person at a wellbeing level of 9.4/10 or 1,000,000 people at a wellbeing level of 9.38/10?”

The obvious solution is to save the larger number of people with slightly lower wellbeing. This illustrates that trade-offs can be made even when dealing with high stakes.
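To make the arithmetic explicit, here is a minimal sketch assuming a simple total-welfare sum (a toy model for this one comparison, not a full population axiology):

```python
# Toy comparison: sum wellbeing across people (a deliberate simplification).
option_one_person = 1 * 9.4        # one person at wellbeing 9.4/10
option_million = 1_000_000 * 9.38  # a million people at 9.38/10

# The million-person option totals roughly a million times more welfare,
# so the tiny per-person difference is swamped by the population size.
assert option_million > option_one_person
```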

Handling the Extremes

If you find yourself at option D (1 million people with lives barely worth living) and it feels wrong, you can simply revert to option C (1,000 people with decent lives). The discomfort with extreme options doesn’t invalidate the entire concept of welfare trade-offs. In real-world scenarios, extreme options are rarely the optimal choice.

The Utility Function

I think it is worthwhile to be precise about the specific properties of the utility function over populations that we use for decision-making. Note that there are many kinds of utility functions, so a “best guess at a utility function made for this specific problem, to be used for decisions” doesn’t need to have much at all to do with other utility functions, like one’s “terminal utility function”.

Key Specific Claims

  1. Monotonically Increasing Utility: The relationship between the number of happy people and total utility is monotonically increasing, if not strictly increasing, all other things being equal.

  2. Average Happiness and Utility: The relationship between average happiness and total utility is monotonically increasing, if not strictly increasing, all other things being equal.

  3. Threshold of Preferability: There exists a threshold at which a human life is considered preferable or not. For example, if asked, “Would you prefer that a human come into existence with endless pain at level X, all other things being equal?”, there is some level X for which we would say no.

  4. Axiom of Trade-offs: For any given level of individual welfare above the threshold of preferability, there exists some number of individuals at that level whose existence would be preferable to a single individual at a higher level of welfare, all else being equal.

  5. Extreme Conclusions: If certain conclusions seem odd at the extremes, it’s more likely that the specific parameters or intuitions are mistaken rather than the claims above.

Note that a von Neumann-Morgenstern utility function would imply (4). I think that assuming such a function is an easy claim to make, though I know there are those who disagree.
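As a sketch of how claims (1) through (4) can coexist, here is a minimal critical-level total utility function. The functional form and the baseline value are my own illustrative assumptions, chosen only to show the claims are mutually consistent:

```python
import math

BASELINE = 5.0  # hypothetical "threshold of preferability" on a 0-10 scale

def utility(n: int, avg_welfare: float) -> float:
    """Total utility of n people at a given average welfare (toy model).

    Lives above BASELINE add value; lives below it subtract value.
    """
    return n * (avg_welfare - BASELINE)

# Claim 1: more happy people -> more total utility (welfare held fixed).
assert utility(100, 8.0) > utility(10, 8.0)

# Claim 2: higher average happiness -> more total utility (size held fixed).
assert utility(100, 8.5) > utility(100, 8.0)

# Claim 3: below the threshold, adding a life is not preferable.
assert utility(1, 4.0) < 0

# Claim 4: for any level above the threshold, some number of people at that
# level beats one person at a higher level.
high, low = 9.0, 5.1
n_needed = math.ceil((high - BASELINE) / (low - BASELINE)) + 1
assert utility(n_needed, low) > utility(1, high)
```

If the assertions pass, all four claims hold simultaneously for this one simple function; nothing about them is internally inconsistent.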

Note: This is very arguably just a worse version of Arrhenius’s impossibility theorems. I suggest reading about those if you’re curious about this topic. Here’s a summary, from Claude.

Arrhenius’ Impossibility Theorem states that no population axiology can simultaneously satisfy the following six intuitively plausible desiderata:

  1. The Egalitarian Dominance Condition: If population A is perfectly equal, is the same size as population B, and every member of A has higher welfare than every member of B, then A is better than B.

  2. The Dominance Addition Condition: Adding people with positive welfare is better than adding people with negative welfare.

  3. The Non-Sadism Condition: An addition of any number of people with negative welfare is worse than an addition of any number of people with positive welfare.

  4. The Mere Addition Condition: Adding people with positive welfare does not make a population worse, other things being equal.

  5. The Normalization Condition: If two populations are of the same size, the one with higher average welfare is better.

  6. The Avoidance of the Repugnant Conclusion: A large population with lives barely worth living is not better than a smaller population with very high positive welfare.

Arrhenius proves that these conditions are incompatible, meaning that any theory that satisfies some of them must necessarily violate others. This impossibility result poses a significant challenge to population ethics, as it suggests that our moral intuitions in this domain may be irreconcilable.

Other philosophers, such as Tyler Cowen and Derek Parfit, have also proposed similar impossibility theorems, each based on a slightly different set of moral principles.

Some Quick Takes on Population Ethics and Population Axiology

Some claim that the “Repugnant Conclusion” represents a need for fundamental changes in the way that we make tradeoffs. I find this unlikely. We make decisions using simple von Neumann-Morgenstern utility function tradeoffs in dozens of industries, with lots of money and lives at stake, and I think we can continue to use those techniques here.

Again, I see the key question as one of where to set specific parameters, like the human baseline. This is a fairly contained question.
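To illustrate how contained the parameter question is, here is a quick sensitivity check using the dialogue’s options C and D with the same toy critical-level form as above. The baseline values tried below are hypothetical parameter choices, not endorsed figures:

```python
def total_value(n: int, avg_welfare: float, baseline: float) -> float:
    # Toy critical-level total welfare: welfare above the baseline, summed.
    return n * (avg_welfare - baseline)

# Option C: 1,000 people at 8.1/10. Option D: 1,000,000 people at 6.2/10.
for baseline in (5.0, 6.0, 6.199):
    c = total_value(1_000, 8.1, baseline)
    d = total_value(1_000_000, 6.2, baseline)
    winner = "C" if c > d else "D"
    print(f"baseline={baseline}: C={c:.0f}, D={d:.0f} -> {winner}")

# With baselines of 5.0 or 6.0, D wins by orders of magnitude; only when the
# baseline creeps up to nearly 6.2 does C win. Whether the "repugnant" option
# is actually optimal hinges on this one parameter, not on the framework.
```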

I think that a lot of the proposed “solutions” in Population Ethics are very unlikely to be true. They mostly seem so bad to me that I don’t understand why they continue to be argued, for or against. I’m paranoid that Effective Altruists have spent too much time debating bad theories here.

See an overview of these reasons at the Stanford Encyclopedia of Philosophy, or in Population Axiology by Hilary Greaves.

  1. Averagism seems clearly false.

  2. Variable value principles seem very weird and unlikely. However, they could still fit within a fairly conventional utility function, so I won’t complain too much.

  3. Critical level principle. The idea is that “a person’s life contributes positively to the value of a population only if the quality of the person’s life is above a certain positive critical level.” The basic idea that there is some baseline makes sense to me. Again, I’m happy for discussions about where exactly it is set. And again, I would see this as a minor change, so minor that it seems to be almost, if not exactly, arguing about semantics.

  4. Person-affecting theories. I find them unlikely, and I also don’t think they address the actual “repugnant conclusion” question. One can just change “you can create population X” to statements like, “Imagine that population X exists, and you are asked about killing them.”

  5. Rejections of transitivity. This seems very radical to me, and therefore unlikely. I understand this to be basically throwing away the idea of us being able to make many simple tradeoffs, like options A vs. B above. I don’t think we need an incredibly radical take that would require us to throw out lots of basic assumptions in economics, in order to get around the basic weirdness of the Repugnant Conclusion.

  6. Accepting the impossibility of a satisfactory population ethics. One listed option seems to be to give up and assume that there’s just a fundamental problem with ethics. From what I can tell, the basic argument is, “It seems very clear that the premises that entail the Repugnant Conclusion are true. We also think that the Repugnant Conclusion represents a severe problem with morality.” I agree that some form of the Repugnant Conclusion is likely to be an optimal decision, but I don’t at all think that it represents a severe problem.

Perhaps worse than these are some solutions I hear from non-philosophers. Many people think about the Repugnant Conclusion briefly and assume that the logical implication is to reject many of the basic premises of making decisions to maximize human welfare. To me, this follows the failure mode of rejecting one unintuitive premise by accepting a dramatically more unintuitive one.