Yet Another Response to the Repugnant Conclusion: Strict (Leximin) Prioritarianism

TL;DR: To compare two world states, first compare the utilities of the unhappiest individual of each world; if they are equal, move on to the second-unhappiest individuals; and so on.

For example:[1][2] {{1, 3}} is better than {{0, 100}}, because the unhappiest individual of the first world (utility 1) is better off than the unhappiest individual of the second (utility 0).

But also, counterintuitively: {{1, 3}} is better than {{1, 2, 3}}, even though the second world differs only by one extra individual with positive utility (more on this below).

Edit 2025-11-09: After some discussion in the comments and some further thought, I’ve removed the “leximinmax” approach and I now stick with the initial “leximin” approach. I’ve also restructured the text and added some further thoughts to argue for this decision.

Abstract

In this post I formally define my current preferred ethical system, Strict Prioritarianism, which is designed to preserve the mathematical clarity of utilitarian reasoning while rejecting the classic repugnant conclusions of mere-addition and average utilitarianism.

Classical utilitarianism violates ethical intuitions by summing utilities across individuals—as if happiness and suffering could cancel across minds. I reject this: utility is intrinsically individual, and there is no coherent notion of “collective utility.”

To replace aggregation by summation, I propose Strict (Leximin) Prioritarianism: compare worlds lexically, from the most miserable individual to the happiest. This approach avoids the classic repugnant conclusions and gives priority to the worst-off—while maintaining a purely comparative, non-aggregative structure. However, it has counterintuitive implications of its own, namely the violation of the Mere Addition Principle (adding a life worth living should not worsen the world). I argue that this is still preferable to any of the other repugnant conclusions, so I don’t see it as a reason to reject this theory.

My goals in publishing this post are:

  • Share these ideas and get some feedback.

  • See if someone already knows some literature that contains these ideas.

  • Ask for more “counterexamples”, i.e. counterintuitive or undesirable implications of this theory.

  • Ask for suggestions for alternative nomenclature.

Preliminaries

In the framework of utilitarianism, we can study world states at an instant in time (synchronic) and world states as they vary across time (diachronic). We will start with synchronic considerations, as they are simpler. But keep in mind that diachronic considerations are what really matters: we care about how our actions will affect the future over time, not just at one particular instant in the future. Since this post is already too long, I will focus only on the synchronic model; I intend to write about the diachronic model in a later post.

Definitions and assumptions

Some definitions:

  • An individual means any sentient mind.

  • I use the term utility to refer to a mathematical model of wellbeing / value / happiness / suffering.[3]

  • A utility set is:

    • conceptually, the set of values that utility could take;

    • formally, I define it as a Dedekind-complete pointed totally preordered set $(U, \leq, 0)$, that is, a totally preordered set that admits suprema and infima, together with a special element $0$ that is used to separate “negative”, “neutral” and “positive” elements of $U$.[4] (This is spelled out in symbols right after this list.)

    • An example would be the set of real numbers $\mathbb{R}$, which fits the bill. The reason I’m not using $\mathbb{R}$ directly is that it would seem to imply some potentially undesirable properties, such as that utility can be added and subtracted, or that utility is Archimedean: that there is some finite number of ice cream sandwiches that can make up for the loss of a loved one.

  • In the context of a utility set, since we’re talking about wellbeing / value / etc., I use words like “worse” and “better” (instead of e.g. “smaller” and “larger”) to denote the order relation.

  • By world state I mean a snapshot of the universe at some instant in time[5].
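Spelling out the utility-set definition from the list above in symbols (the tuple notation $(U, \leq, 0)$ is my own shorthand for this post, not standard nomenclature):

$$(U, \leq, 0): \quad \leq \ \text{a total preorder on } U; \qquad \sup S,\ \inf S \in U \ \text{for every } S \subseteq U; \qquad 0 \in U,$$

with an element $u \in U$ called negative if $u < 0$, neutral if $u \sim 0$, and positive if $u > 0$ (cf. [4]).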

Some assumptions:

  • There will only ever be a finite number of individuals at any given instant in time.

Basic axioms of utilitarian ethics

Before diving into the specific ethical system I want to propose, let’s take a step back and consider which properties any such system should have. Here are my suggestions:

  • (E1) Ethical Pragmatism: The goal of an ethical system should be to allow us to choose which action to take among a set of alternatives—the role of the system is determining which actions are better than others.

  • (E2) Total Preorder of Actions: In particular, given perfect information, an ethical system should define a total preorder on any given set of alternative actions in a specific situation. This means:

    • (E2.1) Transitivity: The comparison between actions resulting from an ethical system should be transitive: if action A is better[6] than B and B is better than C, then A should be better than C.

    • (E2.2) Totality: Among a set of alternative actions in a specific situation, any two actions can be compared to each other: one will be better[6] than the other.

And now some basic axioms specifically for a utilitarian ethical system:

  • (U1) Existence of Individual Utility: Individuals have, at any given point in time, a level of well-being that can be summarised as an element in some fixed utility set $U$.

  • (U2) Compositional Transparency of Collective Utility: The value of a world state can be summarised as an element in some fixed utility set $W$, and it should only depend on the well-being of the individuals it contains.[7]

What’s wrong with classical utilitarianism?

Here is where my views diverge from classical / total / mere-addition utilitarianism. Classical utilitarianism states:

The value of a world state is the sum of the utility of the individuals therein.[8]

This leads to the repugnant conclusion, also known as the Mere-Addition Paradox (I’ll refer to this as the classical repugnant conclusion henceforth):

“For any perfectly equal population with very high positive [individual] welfare, there is a larger population with very low positive [individual] welfare which is better, other things being equal.” [merely because the total sum would be bigger] (Parfit 1984)

And the very repugnant conclusion:

For any population where everyone has very high well-being, there exists a better population consisting of two groups: a significant number of people with very negative well-being, and a much larger number of people having barely positive welfare. (Budolfson & Spears 2018)

See also “The Ones Who Walk Away from Omelas” for a work of fiction exploring this scenario.

I reject these repugnant conclusions; therefore I must reject classical utilitarianism.

If the repugnant conclusion is the symptom of classical utilitarianism being wrong, the cause of the disease is perfectly summarized by this quote by Richard Ryder:

consciousness [...] is bounded by the boundaries of the individual. My pain and the pain of others are thus in separate categories; you cannot add or subtract them from each other. They are worlds apart.

Indeed, in an individual, it might happen that a negative event gets “cancelled out” by a positive event of similar magnitude, thus resulting in neutral wellbeing; that is, it is plausible to sum different contributors of wellbeing or suffering together.[9] But across individuals, this certainly doesn’t make any sense: my happiness doesn’t cancel out your suffering, and vice-versa. They simply coexist.

Hence, sentience, and thus utility, only makes sense at the level of the individual. There is no such thing as collective utility. One can only gather metrics that intend to summarize some information about the distribution of individual utilities. And the sum of utilities is a very bad metric—as highlighted by the repugnant conclusion and other similar undesirable conclusions.

The average principle

As an illustrative example, one of the first alternatives that might occur to you for aggregating individuals’ utilities is to compute an average instead of a total. This avoids the specific repugnant conclusion cited above. This approach, however, has repugnant conclusions of its own. As written in the Stanford Encyclopedia of Philosophy article on the repugnant conclusion:

Despite these advantages, average utilitarianism has not obtained much acceptance in the philosophical literature. This is due to the fact that the principle has implications generally regarded as highly counterintuitive. [...] the principle [...] implies that for a population consisting of just one person leading a life at a very negative level of well-being, e.g., a life of constant torture, there is another population which is better even though it contains millions of lives at just a slightly less negative level of well-being (Parfit 1984). That total well-being should not matter when we are considering lives worth ending is hard to accept. Moreover, average utilitarianism has implications very similar to the Repugnant Conclusion (see Sikora 1975; Anglin 1977).

So this is not a solution in my view.

Strict (Leximin) Prioritarianism

Here I explain my preferred solution to the repugnant conclusion, which I call “strict prioritarianism” or “leximin prioritarianism”.[10]

Firstly, prioritarianism is a broader term encompassing all views holding that, to some degree, “social welfare orderings should give explicit priority to the worse off”. By leximin prioritarianism I then mean utilitarianism as axiomatised above, with the following aggregation of individual utilities: instead of comparing the sums of individual utilities of alternative world states, we compare the utilities of the least well-off individual of each world; if they are equal, we compare the second-least well-off individuals; and so on.

Formally:

  • $W$ is simply defined as $\mathcal{M}(U)$, the set of all finite multisets with values in $U$, i.e. the set of all possible collections of individual utilities. That is, we don’t perform any “aggregation operation”[11]; the magic is in the comparison operation.

  • The (pre)order on $W$ is the leximin order. That is: for any two multisets $A, B \in \mathcal{M}(U)$:

    • we compare the worst element of $A$, call it $a_1$, to the worst element of $B$, call it $b_1$; and if $a_1 < b_1$ (resp. $a_1 > b_1$) we declare $A < B$ (resp. $A > B$);

    • otherwise, if $a_1 \sim b_1$, we do the same with the second-worst element $a_2$ of $A$ and the second-worst element $b_2$ of $B$;

    • and so on until we exhaust the elements of either $A$ or $B$.

    • If we exhaust one of the multisets and we still haven’t declared a winner, we continue as before, except that when we try to grab an element from the exhausted multiset, we take it to be the neutral element $0$.[12]

    More formally:

$$A < B \iff \exists\, i \geq 1 \ \text{such that}\ a_i < b_i \ \text{and}\ a_j \sim b_j \ \text{for all}\ j < i,$$

where $a_1 \leq a_2 \leq \dots$ denote the elements of $A$ in ascending order, and where $a_i$ is taken to be $0$ if the index $i$ exceeds the number of elements of $A$ (likewise for $b_i$ and $B$).
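To make the comparison operation concrete, here is a minimal Python sketch (my own illustration, not part of the definition; it assumes for simplicity that utilities are real numbers, a modelling choice the formal definition deliberately avoids):

```python
from itertools import zip_longest

def leximin_compare(a, b, neutral=0):
    """Leximin comparison of two worlds, given as collections of individual
    utilities. Returns -1 if world `a` is worse than `b`, 1 if better,
    and 0 if the two worlds are equivalent."""
    # Walk both worlds from the worst-off individual to the best-off,
    # padding the exhausted world with the neutral element, as above.
    for x, y in zip_longest(sorted(a), sorted(b), fillvalue=neutral):
        if x < y:
            return -1  # first difference: a's individual is worse off
        if x > y:
            return 1   # first difference: a's individual is better off
    return 0           # equivalent at every position

# The examples discussed in the footnotes:
assert leximin_compare([1, 3], [1, 2, 3]) == 1  # {{1, 3}} beats {{1, 2, 3}}
assert leximin_compare([1], []) == 1            # one happy person beats no people
assert leximin_compare([-1], []) == -1          # an empty world beats one sufferer
```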

I call this leximin prioritarianism for obvious reasons, or alternatively strict prioritarianism because worse-off individuals have lexical priority over better-off individuals. According to this view, it will always be better to increase the wellbeing of a worse-off individual than that of a better-off individual, no matter the amount by which you increase the latter’s wellbeing[13]: there is no valid exchange rate.[14]

You can easily check that this approach avoids all the problems above (a couple of concrete checks follow this list; I can expand on this further if people want):

  • It avoids the classical repugnant conclusion

  • It avoids the very repugnant conclusion

  • It avoids the repugnant conclusions stated above for the average approach
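For instance, with the `leximin_compare` sketch from above (real-valued utilities purely for illustration), the comparison is decided at the worst-off individual, so the first two checks are immediate:

```python
# Classical repugnant conclusion: a huge, barely-happy population does not
# beat a small, very happy one (100 > 1 at the first position).
assert leximin_compare([100] * 10, [1] * 10**6) == 1

# Very repugnant conclusion: a world containing a very badly-off group loses
# at position one, no matter how many barely-happy lives are added to it.
assert leximin_compare([100] * 10, [-50] * 1000 + [1] * 10**6) == 1
```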

Other logical consequences of this approach that I find desirable (you might disagree):

  • It prioritizes increasing the welfare of the least well-off individuals before increasing that of better-off individuals.

  • It prioritizes increasing the welfare of the worst-off existing individuals before bringing new well-off individuals into the world.

  • It prefers a world with no people to a world where even just one person is suffering.[15]

Counterintuitive implications: quality over quantity

Here are some implications of this approach that might be counterintuitive (both are checked concretely right after the list):

  • A world with a very large population of equally happy individuals is worse than a world with just one individual that is slightly happier.

  • Adding a new happy person to a world will worsen the world, unless the world is empty or the happiness of the new person is at least as high as that of the happiest existing person.

    For example, {{1, 2, 3}} is worse than {{1, 3}}, even though the only difference is that in the first world we have added an individual with positive utility (2).
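Both implications can be verified directly with the earlier sketch:

```python
# Quality over quantity: one slightly happier individual beats an
# arbitrarily large, uniformly happy population (3 > 2 at position one).
assert leximin_compare([3], [2] * 10**6) == 1

# Violation of the Mere Addition Principle: adding a person with utility 2
# to {{1, 3}} yields a strictly worse world.
assert leximin_compare([1, 2, 3], [1, 3]) == -1
```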

I argue that these consequences, while hard to accept, are better than the alternative (the repugnant conclusions discussed above). That is, I still prefer this approach to classical utilitarianism.

Indeed, both implications fall under the umbrella of a radical stance of quality over quantity. In particular, the second implication resonates with the notion that, in Jan Narveson’s words, “we are in favour of making people happy, but neutral about making happy people”. I personally agree with these “quality over quantity” principles.

Apply this fractally?

Notice that we could apply this kind of aggregation at the individual level too: perhaps it’s too simplistic to have one real number aggregating the wellbeing of an individual. Rather, an individual has many different sources of suffering and sources of happiness, which could all be recorded in the individual’s utility value. We could then apply leximin to compare between different possible states of the same individual.

We could even go all the way down (an infinite number of steps) and declare that each source of happiness or suffering for an individual can have sub-sources, and each sub-source can have sub-sub-sources, and so on: a fractal structure of utility.
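As a sketch of what this could look like (entirely my own construction, with nested Python lists standing in for the fractal structure): a utility is either a number or a collection of sub-utilities, and the leximin comparison recurses:

```python
from functools import cmp_to_key
from itertools import zip_longest

def fractal_compare(a, b):
    """Leximin comparison where a 'utility' is either a number or a list of
    sub-utilities (sources of happiness/suffering), compared recursively."""
    if not isinstance(a, list) and not isinstance(b, list):
        return (a > b) - (a < b)           # base case: two plain numbers
    a = a if isinstance(a, list) else [a]  # promote a lone number to a
    b = b if isinstance(b, list) else [b]  # singleton collection
    key = cmp_to_key(fractal_compare)
    # Sort sub-utilities worst-first and pad with the neutral element 0.
    for x, y in zip_longest(sorted(a, key=key), sorted(b, key=key), fillvalue=0):
        c = fractal_compare(x, y)
        if c != 0:
            return c
    return 0
```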

Summary comparison of collective metrics of wellbeing

We can summarize the utility aggregation strategies by the core question they answer (a concrete numeric contrast follows the list):

  • Total: what’s the total wellbeing of the world? (assumes that there is such a thing as “total wellbeing”, which I reject)

  • Average: if I had to live the life of an individual selected uniformly at random among the population, what would be my expected well-being?

  • Leximin: if I had to live the life of an individual selected at random among the population, what level of well-being would I be guaranteed? (And what would be the next-worst case? And the next? …)
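For a concrete contrast (illustrative numbers, reusing the earlier `leximin_compare` sketch), here are two pairs of worlds on which the three strategies come apart:

```python
world_a = [100] * 10                 # small, uniformly very happy population
world_b = [-50] * 5 + [1] * 10**6    # a "very repugnant" world

print(sum(world_a) < sum(world_b))             # True: total prefers world_b
print(sum(world_a) / len(world_a) >
      sum(world_b) / len(world_b))             # True: average prefers world_a
print(leximin_compare(world_a, world_b) == 1)  # True: leximin prefers world_a

world_c = [1, 1]                     # modest but equal
world_d = [0, 100]                   # higher average, worse minimum
print(sum(world_c) / 2 < sum(world_d) / 2)     # True: average prefers world_d
print(leximin_compare(world_c, world_d) == 1)  # True: leximin prefers world_c
```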

  1. ^

    Assuming for illustrative purposes that we can use real numbers to model individuals’ utilities—which I think might be a bad choice; see the discussion below.

  2. ^

    The double curly bracket notation is multiset notation: a multiset is an unordered collection of values that allows repetition.

  3. ^

    This abstracts away the issue of what happiness / welfare / value means in the first place. Under some definitions, “utilitarianism” is solely concerned with “happiness” in some narrow sense of the word; here, instead, “utility” stands for anything you might care about that you can model mathematically (not necessarily with real numbers, just mathematically): happiness, welfare, well-being, pleasure, satisfaction...

  4. ^

    Respectively defined as elements $u \in U$ satisfying $u < 0$, $u \sim 0$ and $u > 0$, where $\sim$ denotes the equivalence relation induced by the preorder.

  5. ^

    Or during a small time window, but then we have to define what we mean by “small”.

  6. ^

    By “better than” I mean “no worse than” or “at least as good as”; simplified for clarity.

  7. ^

    I.e. the “utility” of the world state should be a function of the collection of individual utilities in that snapshot; that is, a function $f : \mathcal{M}(U) \to W$, where $\mathcal{M}(\cdot)$ denotes “the set of finite multisubsets of”.

  8. ^

    This assumes something like $U = W = \mathbb{R}$ to make sense of “sum”.

  9. ^

    And even this is dubious; see the subsection “Apply this fractally?”.

  10. ^

    After a quick search, it seems this has already been thought of before, of course, but right now I can’t find any concrete citations, and the formulation below is my own.

  11. ^

    Or if you want to be pedantic, that operation is the identity on $\mathcal{M}(U)$; see [7].

  12. ^

    Note that this choice is important: it has ethical implications. It enables us to say that, for instance, a world with one happy person is better than a world with no people. In general, it implies that adding a person who is at least as happy as the happiest existing person in a world will not worsen the world. If we remove this part, then we will always be indifferent to adding such a person (and thus the order becomes a preorder, because we can have different worlds between which we have no preference).

    In any case, this does not avoid the violation of the Mere Addition Principle: we will still have that the world {{1, 3}} is better than {{1, 2, 3}}, so we can worsen a world by adding a person with positive well-being.

  13. ^

    At least up until the point where the two become equal.

  14. ^

    As highlighted by Tomi Francis in the comments:

    it’s a lexical view that’s got the same problem that basically all lexical views have: it prioritises tiny improvements [to the unhappiest individual] for some over any improvements, no matter how large, for others.

    But this is intentional: as soon as you allow some exchange rate, you end up with scenarios where you are allowed to sacrifice the well-being of some for the benefit of everyone else, and once you can do this you can keep iterating it ad infinitum. I want to reject that.

    Also, in practice it is usually impossible to completely keep track of every single individual, let alone come up with a precise utility value for each; therefore actions that specifically try to target the literal worst-off individual have a low chance of success (improving the leximin score) because they have a low chance of actually having identified the correct individual. Therefore I’d argue that, when taking uncertainty and opportunity cost into account, in practice this system would still prioritize broad interventions most of the time, as long as they affect the lower end of the spectrum (so that they have a higher probability of improving the life of the worst-off individual and thus improving the leximin score).

    Another missing factor that might make this even less of a problem is considering variation through time rather than just snapshots in time (something I intend to write about in a separate post). Indeed, broader actions that affect more people in the lower end of the spectrum will probably be prioritised because they will probably have a larger compounding effect than the ‘tiny interventions’, thus reducing the suffering of the worst-off over the course of time (vs interventions that only target whoever is the unhappiest individual right now).

  15. ^

    This is rather extreme and I expect most of the disagreement will come from this side, but I do stand by it. Also keep in mind that this is just the instantaneous evaluation of a world state; when we start taking into account how things might evolve over time, evaluation gets more complex, so this does not necessarily entail, for example, that I must advocate for human extinction as soon as I believe that there exists someone who is suffering.
