Population ethics: In favour of total utilitarianism over average

This post will argue that, within the framework of hedonic utilitarianism, total utilitarianism should be preferred over average utilitarianism. Preference utilitarianism will be left to future work. We will imagine collections of single-experience people (SEPs), each of whom has only a single experience that gains or loses them a certain amount of utility.

Both average and total utilitarianism begin with an axiom that seems obviously true. For total utilitarianism this axiom is: “It is good for a SEP with positive utility to occur if it doesn’t affect anything else”. This seems to be one of the most basic assumptions one could choose to start with; it’s practically equivalent to “It is good when good things occur”. However, if it is true, then average utilitarianism is false, as a positive but low-utility SEP may bring the average utility down. Average utilitarianism also leads to the sadistic conclusion: if a population of SEPs all have negative utility, it says we should add a SEP who suffers, but slightly less than the existing average (thereby raising it), rather than add no-one at all. Total utilitarianism does lead to the repugnant conclusion, but contrary to common perception, near-zero but still positive utility is not a state of terrible suffering like most people imagine. Instead, it is by definition a life that is still good and worth living overall.
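
To make the arithmetic concrete, here is a minimal sketch in Python of how the two views rank these cases; the utility values and population sizes are hypothetical, chosen purely for illustration.

```python
# A minimal numeric sketch (hypothetical utility values, chosen for illustration)
# of how average and total utilitarianism rank the addition of a new SEP.

def total_utility(utilities):
    """Total utilitarianism: sum the utility of every SEP."""
    return sum(utilities)

def average_utility(utilities):
    """Average utilitarianism: mean utility per SEP."""
    return sum(utilities) / len(utilities)

# Case 1: adding a positive but low-utility SEP to a happy population.
happy_world = [10.0] * 100          # 100 SEPs, each with utility 10
plus_modest = happy_world + [1.0]   # add one SEP with utility 1 (positive)
print(total_utility(plus_modest) > total_utility(happy_world))      # True: total says add
print(average_utility(plus_modest) > average_utility(happy_world))  # False: average says don't

# Case 2: the sadistic conclusion. In a suffering population, average
# utilitarianism endorses adding a SEP who also suffers, just less than the average.
suffering_world = [-10.0] * 100
plus_sufferer = suffering_world + [-5.0]  # the new SEP still has negative utility
print(average_utility(plus_sufferer) > average_utility(suffering_world))  # True: average says add
print(total_utility(plus_sufferer) > total_utility(suffering_world))      # False: total says don't
```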

On the other hand, average utilitarianism starts from its own “obviously true” axiom: that we should maximise the average expected utility for each person, independent of the total utility. We note that average utilitarianism depends on a statement about aggregations (expected utility), while total utilitarianism depends on a statement about an individual occurrence that doesn’t interact with any other SEPs. Given the complexities of aggregating utility, we should be more inclined to trust the statement about individual occurrences than the one about a complex aggregate. This is far from conclusive, but I still believe that this is a useful exercise.

So why is average utilitarianism flawed? The strongest argument for average utilitarianism is the aforementioned “obviously true” assumption that we should maximise each person’s expected utility. Accepting this assumption would reduce the situation as follows:

Original situation → expected utility

Given that we already exist, it is natural for us to really want the average expected utility to be high, and to prefer increasing it over increasing the population, seeing as not existing is not inherently negative. However, while not existing is not negative in an absolute sense, it is still negative in a relative sense due to the opportunity cost. It is plausibly good for more happy people to exist, so reducing the situation as we did above discards important information without justification. Another way of stating the situation is as follows: while it may be intuitive to reduce population ethics to a single lottery, this is incorrect; instead, it can only be reduced to n repeated lotteries, where n is the number of people. This situation can be represented as follows:

Original situation → (expected utility, number of SEPs)

Since this is a tuple, it doesn’t provide an automatic ranking of situations; it needs to be subject to another transformation before this can occur. It is now clear that the first model assumed away the possible importance of the number of SEPs without justification, and therefore assumed its conclusion. Since the strongest argument for average utilitarianism is invalid, the question is: what other reasons are there for believing in average utilitarianism? As we have already noted, the repugnant conclusion (the main objection to total utilitarianism that average utilitarianism avoids) is much less repugnant than it is generally perceived to be. This leaves us with very little in the way of logical reasons to believe in average utilitarianism. On the other hand, as already discussed, there are very good reasons for believing in total utilitarianism, or at least something much closer to total utilitarianism than to average utilitarianism.
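
As a rough illustration of this point, here is a small Python sketch of two transformations that could collapse the tuple into a ranking; the worlds are hypothetical, and the point is simply that only the total-utilitarian transformation makes any use of the number of SEPs.

```python
# A sketch of two ways of collapsing the tuple (expected utility, number of SEPs)
# into a single number that ranks situations. Values are hypothetical.

def average_rank(expected_utility, num_seps):
    # Average utilitarianism discards the number of SEPs entirely.
    return expected_utility

def total_rank(expected_utility, num_seps):
    # Total utilitarianism uses both components of the tuple.
    return expected_utility * num_seps

small_happy_world = (10.0, 10)     # 10 SEPs with expected utility 10 each
large_happy_world = (8.0, 1000)    # 1000 SEPs with expected utility 8 each

# The two transformations disagree: average prefers the small world, total
# prefers the large one, because only total uses the population size.
print(average_rank(*small_happy_world) > average_rank(*large_happy_world))  # True
print(total_rank(*large_happy_world) > total_rank(*small_happy_world))      # True
```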

I made this argument using SEPs for simplicity, but there’s no reason why the same result shouldn’t also apply to complete people. I’ll also note that, according to the Stanford Encyclopedia of Philosophy, average utilitarianism hasn’t gained much favour within the philosophical literature. One of the most common counter-arguments against it is the sadistic conclusion described above; sadly, I couldn’t find a good link explaining it, so I’ll leave you to Google it yourself.

Cross-posted to Less Wrong