Population ethics: In favour of total utilitarianism over average
This post will argue that, within the framework of hedonic utilitarianism, total utilitarianism should be preferred over average utilitarianism. Preference utilitarianism will be left to future work. We will imagine collections of single-experience people (SEPs), who have only a single experience that gains or loses them a certain amount of utility.
Both average and total utilitarianism begin with an axiom that seems obviously true. For total utilitarianism this axiom is: “It is good for a SEP with positive utility to occur if it doesn’t affect anything else”. This seems to be one of the most basic assumptions one could choose to start with—it’s practically equivalent to “It is good when good things occur”. However, if it is true, then average utilitarianism is false, as a positive but low-utility SEP may bring the average utility down. Average utilitarianism also leads to the sadistic conclusion: if a large number of SEPs have negative utility, we should prefer adding a SEP who suffers less than the average over adding no one. Total utilitarianism does lead to the repugnant conclusion, but contrary to common perception, near-zero but still positive utility is not a state of terrible suffering like most people imagine. Instead, it is by definition a life that is still good and worth living overall.
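As a toy illustration of the clash between the two views (all utility numbers here are hypothetical, chosen only to make the arithmetic obvious), adding a low-but-positive-utility SEP raises the total while lowering the average:

```python
# Hypothetical utilities for an existing population of SEPs.
population = [10, 8, 9]

# Add one SEP with a small but positive utility.
extended = population + [1]

total_before, total_after = sum(population), sum(extended)
avg_before = sum(population) / len(population)
avg_after = sum(extended) / len(extended)

# Total utilitarianism approves: the total went up (27 -> 28).
assert total_after > total_before
# Average utilitarianism objects: the average went down (9.0 -> 7.0).
assert avg_after < avg_before
```

The same arithmetic drives the sadistic conclusion: in a population with a very negative average, a newcomer who merely suffers less than the average raises that average, so the average view counts adding them as an improvement.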
On the other hand, average utilitarianism starts from its own “obviously true” axiom: that we should maximise the average expected utility for each person, independent of the total utility. We note that average utilitarianism depends on a statement about aggregations (expected utility), while total utilitarianism depends on a statement about an individual occurrence that doesn’t interact with any other SEPs. Given the complexities of aggregating utility, we should be more inclined to trust the statement about individual occurrences than the one about a complex aggregate. This is far from conclusive, but I still believe it is a useful exercise.
So why is average utilitarianism flawed? The strongest argument for average utilitarianism is the aforementioned “obviously true” assumption that we should maximise expected utility. Accepting this assumption would reduce the situation as follows:
Original situation → expected utility
Given that we already exist, it is natural for us to want the average expected utility to be high, and to prefer it over increasing the population, seeing as not existing is not inherently negative. However, while not existing is not negative in the absolute sense, it is still negative in the relative sense due to opportunity cost. It is plausibly good for more happy people to exist, so reducing the situation as we did above discards important information without justification. Another way of stating the situation is as follows: while it may be intuitive to reduce population ethics to a single lottery, this is incorrect; instead, it can only be reduced to n repeated lotteries, where n is the number of people. This situation can be represented as follows:
Original situation → (expected utility, number of SEPs)
Since this is a tuple, it doesn’t provide an automatic ranking for situations; it needs to be subject to another transformation before that can occur. It is now clear that the first model assumed away the possible importance of the number of SEPs without justification, and therefore assumed its conclusion. Since the strongest argument for average utilitarianism is invalid, the question is what other reasons there are for believing in it. As we have already noted, the repugnant conclusion is much less repugnant than it is generally perceived to be. This leaves us with very little in the way of logical reasons to believe in average utilitarianism. On the other hand, as already discussed, there are very good reasons for believing in total utilitarianism, or at least something much closer to total utilitarianism than to average utilitarianism.
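A quick sketch of why the single-lottery reduction discards information (the worlds and utility values are hypothetical): two worlds can share an average while differing in population, and only the tuple representation keeps them apart.

```python
# Two hypothetical worlds with identical average utility
# but different numbers of SEPs.
world_a = [5, 5]          # 2 SEPs
world_b = [5, 5, 5, 5]    # 4 SEPs

def avg(world):
    return sum(world) / len(world)

# Reducing each world to expected utility alone makes them
# indistinguishable...
assert avg(world_a) == avg(world_b) == 5.0

# ...while the tuple (expected utility, number of SEPs) preserves
# the information the total view cares about.
assert (avg(world_a), len(world_a)) != (avg(world_b), len(world_b))
assert sum(world_b) > sum(world_a)  # totals differ: 20 > 10
```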
I made this argument using SEPs for simplicity, but there’s no reason why the same result shouldn’t also apply to complete people. I’ll also note that, according to the Stanford Encyclopedia of Philosophy, average utilitarianism hasn’t gained much favour within the philosophical literature. One of the most common counter-arguments is called the sadistic conclusion; sadly, I couldn’t find a good link explaining it, so I’ll leave you to Google it yourself.
I am struggling to comprehend the second half of your post. Sorry! Can you clarify exactly why you believe that you have effectively invalidated average utilitarianism one proposition at a time, and the reasons you are alluding to as ‘already discussed’ in favour of total utilitarianism?
Also, I have said this before, but should a forum for effective altruism be a place in which to discuss what are—from the outside—the minutiae of highly obscure moral theories? This is supposed to be a normatively ecumenical movement focused upon the efficacy of charitable giving, is it not? I doubt this is helpful in cultivating that kind of diversity.
Average utilitarianism vs total utilitarianism isn’t minutiae, there’s actually a pretty massive difference in the entire way we think about morality between those two systems.
“Both average and total utilitarianism begin with an axiom that seem obviously true. For total utilitarianism this axiom is: “It is good for a SEP with positive utility to occur if it doesn’t affect anything else...” is the part you want. I probably should have formalised this a bit more.
Also, if you follow the link to Less Wrong, I give a separate and more formal argument in the second section. I removed that argument because I decided that, while convincing, it had no philosophical advantages over (a more formalised version of) the argument that I did give on this page.
I realise the difference between average and total utilitarianism, but in the context of the whole history of moral and political thought, the gap between the two is infinitesimal compared to the gap between the utilitarian framework in which the debate operates and alternative systems of thought. There is no a priori reason to think that the efficacy of charitable giving should have any relation whatsoever to utilitarianism. Yet it occupies a huge part of the movement. I think that is regrettable, not only because I think utilitarianism hopelessly misguided, but because it stifles the kind of diversity which is necessary to create a genuinely ecumenical movement.
I am still struggling to follow any line of reasoning in the second half of what you have written. Why is that quote the part I want? What is it supposed to be doing? Can you summarise what you are doing in one paragraph of clear language?
I’ll give you one example where it makes a difference. Take factory farming—if we care about average utility, then it is clearly bad, as the conditions are massively pulling down the average. If we care about total utility, then it is possible that the animals have a small but positive utility, and that fewer animals would exist if not for factory farming, so its existence might work out as a positive.
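The factory-farming example can be made concrete with made-up numbers (the counts and utilities below are purely illustrative, not empirical claims):

```python
# Hypothetical scenario: a world with factory farming supports more
# animals, each with a small but still positive utility; without it,
# fewer animals exist, with better lives.
with_farming = [0.5] * 1000      # many animals, lives barely worth living
without_farming = [3.0] * 100    # fewer animals, much better lives

def avg(world):
    return sum(world) / len(world)

# Average utilitarianism condemns factory farming here (0.5 < 3.0)...
assert avg(with_farming) < avg(without_farming)
# ...but total utilitarianism could count it as a net positive
# (500.0 > 300.0).
assert sum(with_farming) > sum(without_farming)
```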
Re: other questions. I’ll probably rewrite and repost a more refined version of my argument at some point, but that is work for another day.
Perhaps I have not been clear enough. I am not disputing that average and total utilitarianism can lead to radically different practical conclusions. What I am saying is that the assumptions which underlie the two are far closer together than the gap between that common framework and much of the history of moral and political thought. From the point of view of the Spinozian, Wittgensteinian, Foucauldian, Weberian, Rawlsian, Williamsian, Augustinian, Hobbesian, the two are of the same kind and equally alien for being so. You are able to have this discussion exactly because you accept the project of ‘utilitarianism’. Most people do not.
This is only obviously true if you evaluate average/total at a given time. Population ethicists tend to consider the population in a whole universe history. And in a big enough world, if you can only make changes at the margin then average utilitarianism is the same as critical-level total utilitarianism (where the critical level is set by the average of the population). Then it’s again possible that the animals have a positive contribution.
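A minimal sketch of this comment's point (all numbers are hypothetical): if the critical level is set to the population's current average, then at the margin the average view and critical-level total utilitarianism agree on whether adding one individual is an improvement.

```python
# Hypothetical existing population of utilities.
population = [4.0, 6.0, 8.0]
critical = sum(population) / len(population)  # critical level = average = 6.0

def avg(world):
    return sum(world) / len(world)

def critical_level_total(world, c):
    # Critical-level total view: sum of each utility minus the critical level.
    return sum(u - c for u in world)

newcomer = 7.0  # above the current average / critical level
extended = population + [newcomer]

# Average view: adding the newcomer raises the average (6.0 -> 6.25).
assert avg(extended) > avg(population)
# Critical-level view: the newcomer's contribution (7.0 - 6.0) is positive,
# so the critical-level total also improves.
assert critical_level_total(extended, critical) > critical_level_total(population, critical)
```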
That’s interesting; I’ve never really thought about temporality, but I don’t see any reason why a future person would be valued less.
That said, I see critical-level utilitarianism as flawed for very similar reasons. I’ll probably write about it some time.
I think the argument is that, a priori, utilitarians think we should give effectively. Further, given the facts as they stand (namely that effective donations can do an astronomical amount of good), there are incredibly strong moral reasons for utilitarians to promote effective giving and thus to participate in the EA movement.
I do find discussions like this a little embarrassing but then again they are interesting to the members of the EA community and this is an inward-facing page. Nonetheless I do share your fears about it putting outsiders off.
I agree that given the amount of good which the most effective charities can do, there are potentially strong reasons for utilitarians to donate. Yet utilitarians are but a small sub-set of at least one plausible index of the potential scope of effective altruism: any person, organisation or government which currently donates to charity or supports foreign aid programmes. In order to get anywhere near that kind of critical mass the movement has to break away from being a specifically utilitarian one.
Interesting piece. I too reject the average view, but I’m currently in favour of prior-existence preference utilitarianism (the preferences of currently existing beings and beings who will exist in the future matter, but extinction, say, isn’t bad because it prevents satisfied people from coming into existence) over the total view. I find it quite implausible that people can be harmed by not coming into existence, although I’m aware that this leads to an asymmetry, namely that we’re not obligated to bring satisfied beings into existence but we’re obligated not to bring lives not worth living into existence. One way to resolve that is some form of negative-leaning view, but that has problems too, so I’m satisfied with living with the asymmetry for now.
Nonetheless, I agree that the Repugnant Conclusion is a fairly weak argument against the total view.
You aren’t harmed by not being brought into existence, but there is an opportunity cost: if you would have lived a life worth living, that utility is lost.
I approach utilitarianism more from a framework that, logically, I should be maximising the preference-satisfaction of others who exist or will exist, if I am doing the same for myself (which it is impossible not to do). So, in a sense, I don’t believe that preference-satisfaction is good in itself, meaning that there’s no obligation to make satisfied preferrers, just preferrers satisfied. I still assign some weight to the total view, though.
A bit more introduction might’ve been useful here. Interesting post.
Whatever the problems with the total view, a straight average view is a complete non-starter.
I mean, the sadistic conclusion removes any intuitive appeal immediately.
Note that some clever people disagree with this (http://blog.practicalethics.ox.ac.uk/2014/02/embracing-the-sadistic-conclusion/):
I started writing about how bad his argument is, but then I noticed that in the comments he clarifies that he doesn’t actually embrace the sadistic conclusion, and instead seems to merely think that it’s “not as bad as the repugnant conclusion”, and doesn’t present an overall coherent view on population ethics.
As the author, Stuart, points out, he strongly disagrees with the sadistic conclusion as Rob is describing it:
I think the argument that the decision is not “sadistic” but merely the least bad option is reasonable, if he can win the object-level argument.
However, his explanation of why a lot of people living lives worth living is bad is flawed as he constructs people with a life barely worth living, then appeals to their status as an underclass to encourage us to emotively push this below the life worth living line. Unfortunately, any underclass status needs to be included in the utility calculation when it is determined whether or not a life is worth living.
While these are good points, I wonder if anyone would disagree—I don’t know if anyone really accepts this version of average utilitarianism, as many people seem to tend towards maximizing average utility only for the set of existing people. Though I suppose that wouldn’t be any different from maximizing total utility for the set of existing people. But the big, $64,000 question in ethics (upon which major decisions depend) is whether we have an obligation to create as many ecstatic people as we can. There are so many ways of doing the math on utilitarianism that it may be more fruitful to examine those kinds of issues directly rather than starting from complete moral theories.
Also, I had a somewhat difficult time following your point in the last sections, so you might like to review it and clarify the idea.
What do you mean by maximising average utility for existing people only? As you’ve noted, with a fixed number of people average and total utilitarianism are identical. It is only when we consider whether we should create (or destroy!) people that average and total utilitarianism come into play.
I’ve argued that if someone gets positive utility, then the universe is better when they exist. If I wanted to reduce this argument to a slogan, it would be “Good things are good”. As soon as it is accepted that average utilitarianism is flawed, most of the incentive to try to optimise things other than total utility goes away. There exist a large number of strange utility functions, but the arguments for these seem rather unpersuasive.
Also, which parts in particular were hard to understand?
You’ve argued that average utilitarianism is problematic, but that’s not the same as giving an actual argument that we are obligated to increase future populations.
They may not be persuasive, but they are not necessarily implausible, and the problem may be with utilitarianism in general rather than simply with average utilitarianism.
The section titles aren’t clear and the two paragraphs under them don’t have clear lines of reasoning.
If it is good for someone to experience a life worth living, then surely we would want as many people as possible to experience this.