Two Types of Average Utilitarianism

Was doing a bit of musing and thought of an ethical concept I have not heard discussed before, though I’m sure it has been written about by some ethicist.

It concerns average utilitarianism, a not-very-popular philosophy that I nonetheless find a bit plausible; it has a small place in my moral uncertainty. Most discussions of average utilitarianism (averagism) and total utilitarianism (totalism) begin and end with the Repugnant and Sadistic Conclusions. For me, such discussions leave averagism seeming worse than totalism, but not entirely forgettable.

There is more intricacy to average utilitarianism, however, that I think is overlooked. (Hedonic) total utilitarianism is easily defined: assuming that each sentient being $s$ at point in time $t$ has a "utility" value $u(s,t)$ representing (amount of pleasure minus amount of pain) in the moment, total utilitarianism is just:
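A sketch of what this comes to, assuming we sum over every sentient being alive at each moment and integrate over all of time:

$$U_{\text{total}} = \int_t \sum_{s\,\text{alive at}\,t} u(s,t)\,dt$$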

Average utilitarianism requires specification of an additional value, the moral weight of an individual at a point in time, $w(s,t)$, corresponding to a sentient being's capacity for pleasure and pain, or their "degree of consciousness". Averagism is then (I think?) ordinarily defined as follows, where at any given time you divide the total utility by the total moral weight of the beings alive:
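A sketch in the same notation: take the weighted average at each moment, then integrate over time:

$$U_{\text{avg}} = \int_t \frac{\sum_{s\,\text{alive at}\,t} u(s,t)}{\sum_{s\,\text{alive at}\,t} w(s,t)}\,dt$$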

Laying out the view like this makes clear another flaw, one that is in my view worse than anything discussed in the Repugnant vs. Sadistic Conclusion arguments: utility isn't time-independent. That is, if a population grows over time to (e.g.) 10x its size, each being's pain and pleasure count 10x less than those of beings who came earlier.

This leads to some really bad conclusions. Say the population above needs to accomplish a task that will require immense suffering by one person. Instead of trying to reduce this suffering, this view says that you can dampen it simply by having this person born far in the future. The raw suffering that this being will experience is the same, but because there happen to be more people alive, this suffering just doesn't matter as much. In a growing population, offloading suffering onto future generations becomes an easy get-out-of-jail-free card, in ways that only make sense to someone who treats ethics as a big game.
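To make the dilution concrete with hypothetical numbers (equal moral weights, $w = 1$ for everyone): a moment of suffering worth $u = -100$ drags down the momentary average by

$$\frac{100}{10} = 10 \;\text{ in a population of } 10, \qquad \frac{100}{1000} = 0.1 \;\text{ in a population of } 1000.$$

The same raw suffering counts a hundred times less simply because more people happen to be alive.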

After some thinking, I realized that the above expression is not the only way you can define averagism. You can instead divide the total amount of utility that will ever exist by the total amount of moral weight that will ever exist:
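A sketch in the same notation as before:

$$U'_{\text{avg}} = \frac{\int_t \sum_{s\,\text{alive at}\,t} u(s,t)\,dt}{\int_t \sum_{s\,\text{alive at}\,t} w(s,t)\,dt}$$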

This expression removes the time dependence discussed above. Instead of downweighting an individual's utility by the moral weight of the other beings that currently exist, we downweight it by the moral weight of all beings that will ever exist. We still avoid the Repugnant Conclusion on a global scale (which satisfies the "choose one of these two worlds" phrasing ordinarily used), though on local timescales a lot of repugnant behavior remains that you don't get with the previous definition: if the all-time average so far is low enough, the view endorses adding huge numbers of barely-positive lives, since each one still pulls that average up.

The time-invariant expression also puts a bit of a different spin on average utilitarianism. By the end of the last sentient life, we want to be able to claim that the average sentient being was as happy as possible. If the all-time average ever reaches a level we can never match again, the best option is just to have no more sentient life, to "turn off the tap" of sentience before the water gets too diluted with below-average (even if positive) utility. We are also obligated to learn about our history, to determine whether ancient beings were miserable or ecstatic, and so to see at which level of utility it is still worth having new lives.
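The stopping condition falls out of a little algebra (a sketch, writing $U$ and $W$ for the total utility and total moral weight accumulated over all of history): a new life with utility $u$ and weight $w$ moves the all-time average from $U/W$ to $(U+u)/(W+w)$, which is an improvement exactly when

$$\frac{u}{w} > \frac{U}{W}.$$

A positive but below-average life still makes things worse, hence the case for turning off the tap.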

...Or at least, in theory. In practice, of course, it’s really hard to figure out what the actual implications of different forms of averagism are, given how little we know about wild animal welfare and given the correlation between per-capita prosperity and population size. That being said, I think this form of averagism is at least interesting and merits a bit of discussion. I certainly don’t give it too much credence, but it has found a bit of weight in my moral uncertainty space.