Is the reasoning of the Repugnant Conclusion valid?

Introduction

In population ethics, the Repugnant Conclusion is the observation that, according to certain theories, a large population of people whose lives are barely worth living is preferable to a small population with higher standards of living. (A very good summary is given by the Stanford Encyclopaedia of Philosophy, which also details many possible responses.)

I propose yet another response. I am not convinced that two populations (say A and B) should be compared simply by ordering the sum total of each population's welfare. I propose instead that they should be compared by considering how welfare would change if one population were transformed into the other (e.g. A --> B).

By combining this axiom with elements of more established ideas, such as a person-affecting view, I believe we can build a theory that avoids many of the serious flaws affecting the alternatives. I describe one such example below.

Notation

Let ε and Ω denote very small and very large positive quantities respectively. I will also use G to refer to a god who can alter populations instantaneously at will.

Real and imaginary welfare

I will use real to describe people that currently exist and imaginary to describe people that do not currently exist.

I also want to define what I will call real and imaginary welfare. This is a little bit more involved.

Real welfare

  • gain of welfare for real people (+ve real welfare)

  • loss of welfare for real people (-ve real welfare)

  • creation of imaginary people with negative welfare (-ve real welfare)

Imaginary welfare

  • creation of imaginary people with positive welfare (+ve imaginary welfare)

The idea is that real welfare outranks imaginary welfare: given two choices, we should prefer the one that yields the greater net change in real welfare, and only use imaginary welfare to break ties.

As an example of this ranking, let (r, i) denote a change in real welfare by r together with a change in imaginary welfare by i. Then an action yielding (+ε, 0) should be preferred to one yielding (0, +Ω): even a tiny real gain outranks an enormous imaginary one.
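This ranking can be sketched in code. The pair encoding (change in real welfare, change in imaginary welfare) and the function name below are my own illustration, not notation from the post:

```python
def prefer(action_a, action_b):
    """Return the preferred of two actions, each encoded as a pair
    (delta_real, delta_imaginary): real welfare is compared first,
    and imaginary welfare is used only to break ties."""
    if action_a[0] != action_b[0]:
        return action_a if action_a[0] > action_b[0] else action_b
    return action_a if action_a[1] >= action_b[1] else action_b

# A tiny real gain beats an enormous imaginary gain:
print(prefer((0.01, 0), (0, 1_000_000)))  # -> (0.01, 0)

# With real welfare tied, more imaginary welfare wins:
print(prefer((0, 5), (0, 2)))             # -> (0, 5)
```

This is just a lexicographic order on the pairs, which is one natural way to make "real outranks imaginary" precise.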

We might also suggest that no part of a real population with positive welfare should end up with negative welfare, and that, given these constraints, welfare is ideally distributed as equally as possible. I do not believe that these additional refinements are detrimental to the core argument here.

Axiom of Comparison

We will now use the following axiom to compare two different populations, A and B.

Let A, B be two different populations. Then, B is better than A if and only if there exists a way to instantaneously transform A into B with an action that is better than the status quo at A.

Remark: if A is not better than B, this does not necessarily entail that B is better than A! In other words, order matters.
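The axiom and its asymmetry can be made concrete with a toy model. Everything below is my own illustrative machinery, not the post's: I assume a population is a list of welfare levels, that person i of the current population becomes person i of the candidate, and that surplus people are created (imaginary) or removed (a loss of real welfare).

```python
def is_better(candidate, current):
    """Axiom of Comparison (toy version): `candidate` is better than
    `current` iff transforming `current` into `candidate` beats the
    status quo, comparing real welfare before imaginary welfare."""
    shared = min(len(current), len(candidate))
    # Real welfare: changes to people who already exist...
    d_real = sum(candidate[i] - current[i] for i in range(shared))
    # ...plus the loss from removing real people...
    d_real -= sum(current[shared:])
    # ...plus any newly created people with negative welfare.
    d_real += sum(w for w in candidate[shared:] if w < 0)
    # Imaginary welfare: newly created people with positive welfare.
    d_imag = sum(w for w in candidate[shared:] if w > 0)
    return d_real > 0 or (d_real == 0 and d_imag > 0)

A      = [10, 10]        # two real people with high welfare
A_plus = [10, 10, 1, 1]  # the same two people plus two extras

print(is_better(A_plus, A))  # True: real welfare unchanged, imaginary welfare gained
print(is_better(A, A_plus))  # False: two real people would be removed
```

The last two lines show the remark above directly: the comparison is not symmetric, because the transformation is evaluated from a particular starting point.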

The Repugnant Conclusion fails

In the following reasoning, real populations will be illustrated with opaque boxes and imaginary populations with transparent boxes.

The standard argument for the Repugnant Conclusion begins by comparing A and A+. Since A+ has higher total welfare than A, the reasoning goes, A+ is better than A. Then B, which has the same number of people as A+ but higher total welfare and less inequality, is better than A+, and since A+ is better than A, B is better than A.

If we accept this reasoning, then by iterating it (B to B+, B+ to C, and so on), we cannot deny that an enormous population Z of lives barely worth living is better than A. This is rather repugnant.

So what happens if we instead invoke the Axiom of Comparison? Let us consider the first comparison between A and A+.

We can’t just compare total welfare: according to the axiom, we need to find a way “to get from” A to A+. In particular, the population of A needs to increase instantaneously. (That’s fine; we can ask G to do this.) But by definition, these newly created people are imaginary.

You might be able to see where this is going now. Call the population we actually obtain A’: it consists of the real population of A plus the newly created imaginary people. We do have that A’ is better than A, since it has higher imaginary welfare for the same real welfare. But A’ is not the same as A+.

Let us finish this argument for completeness. We will now consider how to get from A’ to B (bearing in mind that we started at A).

Note that B is not better than A’, since the real population of A’ (originally the population of A) must lose real welfare to transform into B. Hence B is not necessarily better than A, and the chain of reasoning breaks. Thus, the Repugnant Conclusion fails.

Direct comparison between A and B

We have shown that the standard chain of reasoning that leads to the Repugnant Conclusion fails, given the axiom stated earlier. However, this does not by itself establish that B is not better than A, so let us compare the two directly.

Once again, to get from A to B, we must lose real welfare and only gain imaginary welfare. Therefore, B is not better than A.

But what if we try to get from B to A?

Note that the surviving part of the population gains a small gift of welfare, but, rather more dramatically, half of the population has disappeared(!), corresponding to a dire loss of real welfare that outweighs the gain. Thus, A is not better than B.
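Both directions of this direct comparison can be checked with the same toy model as before: a population as a list of welfare levels, person i of one population becoming person i of the other, and surplus people created or removed. The numbers are my own illustrative choices: say A has 2 people at welfare 10 and B has 4 people at welfare 6, so B has higher total welfare but lower average welfare.

```python
def transform_effect(old, new):
    """(delta_real, delta_imaginary) of turning population `old` into `new`."""
    shared = min(len(old), len(new))
    d_real = sum(new[i] - old[i] for i in range(shared))  # existing people
    d_real -= sum(old[shared:])                           # real people removed
    d_real += sum(w for w in new[shared:] if w < 0)       # new people, negative welfare
    d_imag = sum(w for w in new[shared:] if w > 0)        # new people, positive welfare
    return d_real, d_imag

def improves_status_quo(effect):
    """An action beats the status quo (0, 0) iff it gains real welfare,
    or leaves real welfare unchanged while gaining imaginary welfare."""
    d_real, d_imag = effect
    return d_real > 0 or (d_real == 0 and d_imag > 0)

A, B = [10, 10], [6, 6, 6, 6]

print(transform_effect(A, B))                       # (-8, 12): real loss, imaginary gain
print(improves_status_quo(transform_effect(A, B)))  # False -> B is not better than A
print(transform_effect(B, A))                       # (-4, 0): two real people vanish
print(improves_status_quo(transform_effect(B, A)))  # False -> A is not better than B
```

Neither transformation beats the status quo, matching the conclusion above: each population should prefer to stay as it is.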

This can be interpreted in a very natural way. Suppose we were to offer the populations of A and B the option of having G transform either population into the other. Then both populations should, in fact, refuse: both should prefer the status quo.

Is the Repugnant Conclusion really avoided?

You might now have thought of the following counterargument. Suppose that there is currently no real population, and we are debating with G whether we should create A or Z. According to the rules above, we should create Z. Is this not repugnant?

I say that it is not. The difficulty of the Repugnant Conclusion, for me at least, is that it suggests that a real population should strive to become larger and more miserable on average. But the people of Z wouldn’t be becoming more miserable—they’d simply be coming into existence.

Lastly, remember that populations are not static. Once we create Z (and it becomes real), who’s to say it won’t flourish and become Z+?

Indeed, if the population of Z chooses to believe in the Axiom of Comparison, then they need not come to any Repugnant Conclusions about their future :-)

Flaws that are avoided

Here are some of the flaws that I believe are avoided by this formulation.

No dependence on ad hoc utility functions

Transitivity is not rejected

No problems due to averaging

If one supports the average principle, this may lead to the conclusion that it is better to have millions of people being tortured than a single person being tortured in a slightly worse way (since the single-person population has slightly lower average welfare).

No Sadistic Conclusion

This is an issue for some theories where adding a small number of people with negative welfare is preferable to adding a large number of people with small positive welfare.

Edit: following a helpful comment from Max Daniel, it now seems to me that a negative utilitarian aspect of this formulation which was originally included can be removed, still without leading to the Repugnant Conclusion. Thus, I believe that the following flaw is no longer an issue. I will keep what was written here for now, just in case I have made a fatal error in reasoning.


Remaining flaws or implications

I have identified one possible flaw, although, of course, that doesn’t mean that more can’t be found.

Suppose that we have a population X consisting of 1 person with welfare -ε and 999 people with welfare 0. Then, according to the rules above, given the choice of bringing the first person up to neutral welfare or improving the welfare of the 999 other people by Ω each, we should prefer the former, as it is “real” rather than “imaginary”.

This is not necessarily repugnant, but if you were one of the 999, you might perhaps feel a little miffed, so let us examine this a bit more. I provide some counter-arguments to the flaw.

It isn’t a flaw

If you are firmly a negative utilitarian, then this isn’t a flaw. Maybe it really should be considered better that everyone’s life is first made worth living before giving large amounts of welfare to people who do not truly need it.

It is too implausible

Is it really even plausible that we can create large amounts of welfare for 999 people, but only a tiny amount of welfare for 1 person? Perhaps not all theories need to work in such extremes.

There is a bias in the reading

Consider two scenarios. In the first, you and G make this decision without ever telling the population X. In the second, however, you let the people of X know that you are undecided between the two options before you make the decision.

In the first scenario, if you choose to help the 1 person with negative welfare, the other 999 people have no idea that they were rejected and therefore can’t feel bad about it.

But in the second scenario, if, again, you choose to help the one person with negative welfare, the 999 people will be aware that they were rejected. This may even cause a loss of real welfare as they realise what they missed out on. Thus, in this scenario, it could be the correct decision for you and G to improve the welfare of the 999 instead.

Thus, to summarise this final point and the entire post: I believe that it’s not about the result—it’s about how you get there.


Closing remarks

I welcome and encourage any criticism. I believe that the Axiom is an original idea, but I certainly wish to be corrected if I am mistaken.

There are a few additional remarks I had to make, but I have omitted them so as to avoid this post being even longer. Thanks for reading.