I think this post is too long for what it’s trying to do. There’s no need to frontload so many technicalities—just compare finite sequences of real numbers. The other details don’t matter too much.
You’re probably right :) I kind of wanted to write down all my assumptions to be able to point at a self-contained document when I ask myself “what’s my current preferred ethical system”, and I got a bit carried away. Indeed you could explain the gist of it with real numbers, though I think it was worth highlighting that real numbers (with their implicit algebraic and order structure we might be tempted to use) are probably a really bad fit to measure utilities.
If I’ve understood your view correctly, it’s a lexical view that’s got the same problem that basically all lexical views have: it prioritises tiny improvements for some over any improvements, no matter how large, for others.
Yes, and perhaps this is its biggest “repugnant conclusion”, but I think it is far better than the original repugnant conclusions. It’s a feature: as soon as you allow some exchange rate, you end up with scenarios where you are allowed to sacrifice the well-being of some for the benefit of everyone else, and once you can do this once you can keep iterating this ad infinitum. I want to reject that.
Also, in practice it is usually impossible to keep track of every single individual, let alone come up with a precise utility value for each; actions that specifically try to target the literal worst-off individual (the ‘tiny improvement’ you mention) therefore have a low chance of success (of improving the leximin score), because they have a low chance of actually identifying the correct individual. So I’d argue that, once uncertainty and opportunity cost are taken into account, this system would in practice still prioritize broad interventions (the ‘large improvements’) most of the time, and, among those, the ones that affect the lower end of the spectrum (since those have a higher probability of improving the life of the worst-off individual).
Another missing factor that might make this even less of a problem is considering variation through time rather than just snapshots in time (something I intended to write about in a separate post). If we assume time (or “useful time”, e.g. up until the heat death of the universe) is finite, we can apply leximin over the world states across time (with discrete time steps, so that there is a finite number of world states) as the actual metric for ranking actions. If we assume time is infinite and sentient beings might exist forever (a terrible assumption, but it leads to an elegant model), I propose using the limit inferior of world states instead. Under either model, broader actions that affect more people at the lower end of the spectrum will probably be prioritized, because they will probably have a larger compounding effect than the ‘tiny interventions’, thus reducing the chance of individuals suffering extremely over the course of time.
The key point is this: the reason your view avoids the Repugnant Conclusion is that it recommends populations where there is much less total wellbeing but some people have great lives over populations where there is more total wellbeing but everyone has a mediocre life.
Exactly, and I see that as desirable. As I explain in the post, I reject the need to maximize total wellbeing. You can think of it as an extreme (lexical) version of “quality over quantity”.
Actually, when it comes to comparing same-person populations of people with positive wellbeing, it looks to me like your view always recommends the one with the highest maximum wellbeing. That’s because you’re using the leximax ordering.
Yes. If the maxima are equal, you then look at the second happiest individuals; and so on.
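In code terms, the comparison I have in mind looks roughly like this. It’s only a sketch; the function name and the handling of different population sizes are illustrative assumptions on my part:

```python
# Minimal sketch of the leximax comparison described above (names are mine).
def leximax_compare(world_a, world_b):
    """Return 1 if world_a is ranked better, -1 if world_b is, 0 on a tie."""
    a = sorted(world_a, reverse=True)  # happiest individuals first
    b = sorted(world_b, reverse=True)
    for ua, ub in zip(a, b):
        if ua != ub:
            return 1 if ua > ub else -1
    return 0  # all compared positions equal; differing population sizes left open here

print(leximax_compare([10, 3], [9, 9]))  # 1: the higher maximum (10 > 9) settles it
print(leximax_compare([9, 5], [9, 2]))   # 1: maxima tie, so the second happiest decide
```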
You *can* avoid the Repugnant Conclusion if you’re willing to go down this route. But I suspect most people would think that the cure is worse than the disease.
I’d say that this is way more acceptable than the repugnant conclusion, but I would also bet this is quite an unpopular view. Perhaps people would prefer the pure leximin approach then: in that case, (9, 9) would be better than (1, 10). The problem then is that (1, 2, 10) is worse than (1, 10): by adding a happy person, we’ve made the world worse. Perhaps this is the least worrying conclusion of all the ones we’ve discussed, if you accept a radical version of “we want to make people happy, not make happy people”. And maybe there’s a different “fix” altogether that avoids both issues.
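Here’s a minimal sketch of the plain leximin comparison with the two examples above; again, the naming and the treatment of ties are just illustrative assumptions:

```python
# Minimal sketch of the plain leximin comparison (names are mine).
def leximin_compare(world_a, world_b):
    """Return 1 if world_a is ranked better, -1 if world_b is, 0 on a tie."""
    a = sorted(world_a)  # worst-off individuals first
    b = sorted(world_b)
    for ua, ub in zip(a, b):
        if ua != ub:
            return 1 if ua > ub else -1
    return 0  # all compared positions equal; what to do with extra people is left open

print(leximin_compare([9, 9], [1, 10]))      # 1: (9, 9) beats (1, 10)
print(leximin_compare([1, 2, 10], [1, 10]))  # -1: the "birth paradox" case
```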
I also think it’s inaccurate to call this view a version of “prioritarianism”—again, unless I’ve misunderstood how it works.
I call it prioritarianism because in both leximin and leximinmax we compare suffering first, and among the sufferers we prioritize the ones that are suffering the most. However, when we look at the positive side, no one is suffering overall; there are just different levels of happiness. If you keep using leximin, it still prioritizes the people that are least happy. If you use leximax, you are prioritizing the happiest.
Actually, now that I think of it again after the all-nighter, yes, leximax on happy people seems bad. Perhaps pure leximin all the way up is the better approach. The only problem is the “birth paradox” outlined above, but perhaps it’s the least of all the problems we’ve considered. I’ll try to think about whether there are other solutions.

Thanks for taking the time to reply!
No problem. It’s always good to see people getting into population ethics.
>>maybe there’s a different “fix” altogether that avoids both issues.
Unfortunately, there isn’t. Specifically, consider the following three conditions:
1) A condition encoding avoidance of what you call the “birth paradox” (usually called the Mere Addition Principle)
2) Another condition which prevents us from being radically elitist in the sense that we prefer to give LESS wellbeing to people who already have a lot, rather than MORE wellbeing to people who have less. (Notice that this can be stated without appealing to the importance of “total wellbeing”—I just did it earlier as a convenient shorthand.) These conditions are usually called things like “Non Anti-Egalitarianism”, “Pigou-Dalton”, or “Non-Elitism”.
3) Transitivity.
It can be shown that any population axiology satisfying these three conditions implies the Repugnant Conclusion. There’s also a big literature on impossibility results for avoiding the Repugnant Conclusion, and to cut a long story short, these conditions (1), (2) and (3) can be weakened or replaced and the impossibility results still hold.
If you’re interested in this topic, I would recommend trying to slog through some of the extensive literature first, if you haven’t already. A good place to start is Hilary Greaves’ Philosophy Compass article, “Population Axiology”.
If you find that article interesting, you can follow up some of the references there. There’s a precise and easy-to-follow rendition of the original Mere Addition Paradox in Ng (1989), “What should we do about future generations? Impossibility of Parfit’s Theory X”. And I think Parfit’s original discussion of population ethics in Reasons and Persons (part 4) is still well worth reading, even if it’s outdated in some regards.
The most important discussion of impossibility theorems is in an unpublished manuscript by Gustaf Arrhenius, “Population Ethics: The Challenge of Future Generations”. The main results in that book are also published in papers from 2003, 2009 and 2011. The 2022 Spears/Budolfson paper, “Repugnant Conclusions”, is also well worth a read in my opinion, as is Jacob Nebel’s “An Intrapersonal Addition Paradox” (2019 I think). Jake also has a very nice paper, “Totalism Without Repugnance”, where he puts forward a lexical view and tries to answer some of the standard objections to them.
Thanks, I’ll take a look at the literature you suggested before thinking further.
Just a quick thought that comes to mind:
To avoid the birth paradox, when comparing two vectors we could first match all individuals that have the same utility in both worlds, eliminate those utilities from the comparison, and then perform a leximin comparison on the remaining utilities from both worlds. I think this solves the birth paradox while preserving everything else.
As you’ve stated the view, I think it would violate transitivity. Consider the following three populations, where each position in the vector denotes the wellbeing of a specific person, and a dash represents the case where that person does not exist:
A: (2, 1)
A’: (1, 2)
B: (2, -)
A is better than (or perhaps equally as good as?) B, because we match the first person with themselves, then settle by comparing the second person at 1 (in A) to non-existence in B. You didn’t say how exactly to do this, but I assume A is supposed to be at least as good as B, since that’s what you wanted to say (and I guess you mean to say that it’s better).
However, A and A’ are equally good.
Transitivity would entail that A’ is therefore at least as good as B, but on the procedure you described, A’ is worse than B, because we compare them first according to wellbeing levels for those who exist in both, and the first person exists in both and is better off in B.
I don’t doubt that the view can be modified to solve this problem, but it’s common in population ethics that solving one problem creates another.
I probably won’t reply further, by the way—just because I don’t go on EA forums much. Best of luck.
Sorry, perhaps I wasn’t clear: I didn’t mean matching by the identity of the individual, I meant matching on just their utility values (doesn’t matter who is happy/suffering, only the unordered collection of utility values matters). So in your example, A and A’ would be identical worlds (modulo ethical preference).
Formally: let a, b : UI → ℕ be multisets of utilities (world states), where UI is the set of possible utility values and a(u) is the number of individuals with utility u. (Notice that I’m using multisets rather than vectors on purpose, to indicate that the identities of the individuals don’t matter.) To compare two worlds, define the multiset a ∩ b by (a ∩ b)(u) = min(a(u), b(u)), and define a′ = a − (a ∩ b) and b′ = b − (a ∩ b) (pointwise). Then we compare a′ and b′ with leximin.
However, this still isn’t transitive, unfortunately. E.g.:
A: {{2}}
B: {{1, 3, 3}}
C: {{3}}
Then A ≳ B and B ≳ C, but not A ≳ C.
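For what it’s worth, here is a minimal sketch of that comparison, using Python’s Counter as the multiset; the function name and the tie rule for when one remainder runs out of utilities are my own assumptions, and the B-versus-C ranking depends on exactly that rule:

```python
# Sketch of the multiset-matching comparison above, using Counter as the multiset.
from collections import Counter

def compare_after_matching(world_a, world_b):
    """Return 1 if world_a is ranked better, -1 if world_b is, 0 otherwise (assumed tie rule)."""
    a, b = Counter(world_a), Counter(world_b)
    common = a & b                        # (a ∩ b)(u) = min(a(u), b(u))
    ra = sorted((a - common).elements())  # leftover utilities, worst-off first
    rb = sorted((b - common).elements())
    for ua, ub in zip(ra, rb):
        if ua != ub:
            return 1 if ua > ub else -1
    return 0  # one remainder exhausted (or both empty); calling this a tie is my assumption

print(compare_after_matching([2], [1, 3, 3]))        # 1: A is ranked above B
print(compare_after_matching([2], [3]))              # -1: C is ranked above A
print(compare_after_matching([1, 3, 3], [3]))        # 0 under the assumed tie rule
```

Under the tie rule assumed here, B and C come out as equally good, which together with the other two comparisons still conflicts with transitivity.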
Right now I think the best solution is to use plain leximin (as defined in my post) and reject the Mere Addition Principle.