In collaboration with Christian Tarsney, I’ve developed a new theory of population ethics, which I call the Saturation View. I think that, from a purely intellectual perspective, it’s probably the best idea I’ve ever had. It was certainly great fun to work on.
It’s a bit long for a blog post, so please check out the full draft paper here. (This is a relatively quick first draft to get the idea out there; it’ll get revised and improved.) The rest of this post will give an overview:
In Reasons and Persons, Parfit presented the challenge of developing “Theory X”: a population axiology[1] that dissolves the Mere Addition Paradox, avoiding the Repugnant Conclusion without facing other unacceptable conclusions.[2] I think the Saturation View is a plausible candidate for Theory X.
As background motivation, I think there are fairly strong arguments for total utilitarianism as an axiology. On this view, one outcome is better than another iff the sum total of wellbeing is greater. But there are four sets of implications of the view[3] that I find very unintuitive:
The Very Repugnant Conclusion: Take any population A that consists of some number of extraordinarily well-off lives. For any number n and any negative welfare level z, there is some population Z, consisting of n lives at welfare z and a sufficiently large population of lives that are just barely worth living, such that Z is better than A.
So, for example, a billion galaxies of bliss is worse than a billion billion galaxies of extreme suffering plus some very large number of lives that are barely worth living.
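On the total view this comparison is just arithmetic: any fixed positive total can be outweighed by enough barely-positive lives. A minimal sketch with made-up welfare numbers (the populations and figures below are illustrative placeholders, not from the paper):

```python
# Illustrative Very Repugnant Conclusion arithmetic under total utilitarianism.
# All welfare levels and population sizes are invented for illustration.

def total_welfare(population):
    """Sum welfare over (welfare_level, count) pairs."""
    return sum(level * count for level, count in population)

# Population A: a million extraordinarily well-off lives at welfare +100.
A = [(100, 10**6)]

# Population Z: a billion lives at strongly negative welfare -50, plus
# enough lives barely worth living (welfare +0.01) to outweigh them.
n_barely = 2 * 10**13  # chosen large enough that Z's total exceeds A's
Z = [(-50, 10**9), (0.01, n_barely)]

# The total view ranks Z above A, however large we make A's welfare,
# because n_barely can always be increased to compensate.
assert total_welfare(Z) > total_welfare(A)
```

Nothing in the sketch depends on the particular numbers: for any A and any amount of suffering in Z, some value of `n_barely` flips the comparison.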
Extreme Fanaticism. Consider a guaranteed outcome B that is extremely good, an outcome Y that is extremely bad, and a probability p. No matter how good B is, how bad Y is, or how small p is, there is some outcome C such that a probability p of C combined with a probability (1 − p) of Y is better than the guarantee of B.
So, for example, a billion galaxies of bliss for sure is worse than a 99.99999% chance of a billion billion galaxies of extreme suffering plus a 0.00001% chance of some sufficiently good outcome.[4]
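Given expected-value reasoning over totals, the gamble beats the sure thing whenever C clears a finite threshold. A minimal sketch with stand-in magnitudes (`V_B`, `V_Y`, `V_C`, and `p` are illustrative placeholders, not figures from the post):

```python
# Expected-value arithmetic behind the fanaticism example.
# All magnitudes are invented stand-ins for illustration.

def expected_value(lottery):
    """Expected value of a list of (probability, value) pairs."""
    return sum(p * v for p, v in lottery)

V_B = 10**9    # the guaranteed very good outcome
V_Y = -10**18  # the extremely bad outcome
p   = 1e-7     # the tiny probability of the very good outcome C (0.00001%)

# The gamble beats the guarantee exactly when
#   p * V_C + (1 - p) * V_Y > V_B,  i.e.  V_C > (V_B - (1 - p) * V_Y) / p.
threshold = (V_B - (1 - p) * V_Y) / p
V_C = 1.01 * threshold  # any value above the threshold will do

gamble = [(p, V_C), (1 - p, V_Y)]
assert expected_value(gamble) > V_B
```

The point of the example is that `threshold` is always finite, so however extreme `V_Y` and however tiny `p`, some `V_C` makes the gamble come out ahead.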
Infinitarian Issues. On the standard understanding of infinity, a population of an infinite number of beings at wellbeing +2 has the same total wellbeing as a population of an infinite number of beings at wellbeing +1; the former seems better than the latter, but the total view doesn’t have that implication. What’s more, the total wellbeing of a population of an infinite number of beings at wellbeing +2 and an infinite number of beings at wellbeing −1 is undefined; this is also true of the total wellbeing of a population of an infinite number of beings at wellbeing +1 and an infinite number of beings at wellbeing −2. But, again, the former seems better than the latter.
Responses to these issues have been proposed,[5] but they come with their own issues.
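The undefinedness in the mixed infinite case can be made concrete: with infinitely many lives at +2 and infinitely many at −1, the partial sums can be driven toward +∞ or −∞ depending on the order in which lives are counted, so no order-independent total exists. A small illustrative sketch (the `partial_sum` helper is mine, not from the paper):

```python
# Why "total wellbeing" is undefined for mixed infinite populations:
# partial sums over infinitely many +2s and -1s depend on summation order.

def partial_sum(pos, neg, pattern, n_blocks):
    """Sum n_blocks repetitions of a counting pattern, where pattern gives
    how many positive- and negative-welfare lives are counted per block."""
    k_pos, k_neg = pattern
    return n_blocks * (k_pos * pos + k_neg * neg)

# Count one +2 life per one -1 life: partial sums grow without bound
# (+1 per block) ...
assert partial_sum(2, -1, (1, 1), 10**6) == 10**6
# ... but count one +2 life per three -1 lives: partial sums fall without
# bound (-1 per block).
assert partial_sum(2, -1, (1, 3), 10**6) == -10**6
# Same infinite population, opposite limits: the "total" is undefined.
```

This is just the standard rearrangement phenomenon for divergent series, applied to population totals.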
Monoculture Recommendation. Take some fixed pot of resources that can be used to create lives. There is no population that can be created with those resources that is better than a population that consists only of qualitatively identical replicas of a small number of beings.
And, in practice, it is likely that the best possible future, on many population axiologies, consists almost wholly of a monoculture. Some people have called this “tiling the universe with hedonium”.
The first three problems have been widely discussed in academic philosophy. The last has not. But it turns out that taking the last of these problems seriously ends up giving us the resources to avoid the first three, too. In particular, on one way (but not the only way) of accounting for the value of variety — the Saturation View — we can dissolve the Mere Addition Paradox and offer a principled response to the fanaticism and infinitarian problems, too.
I’m not claiming that you should believe Saturationism outright. In particular, its implications in some highly negative worlds are hard to stomach, though I think similar implications are unavoidable for any view that avoids fanaticism. But I believe the four problems listed above are the most serious issues for the total view, and that Saturationism offers a considerably more plausible way of addressing them than any alternative to date.
The full draft is here. It does a number of things:
- Presents the monoculture problem and gives arguments for thinking that variety is intrinsically valuable.
- Gives an informal statement of Saturationism and a toy example of the view. (The full formal statement is in the appendix.)
- Shows how accepting the value of variety dissolves the Mere Addition Paradox.
- Shows how Saturationism avoids fanaticism in a particularly plausible way.
- Shows how Saturationism avoids many problems in infinite-population settings.
- Shows that Saturationism’s violation of separability (which almost any view other than totalism or the critical-level view will suffer from) is more limited and tamer than that of other non-separable axiologies.
- Discusses the difficulty of handling highly negative-value worlds. (The bulk of this draft focuses on worlds with positive value or only somewhat negative value.)
This is still draft work, and I expect the finished product to end up different in a number of ways. I mainly haven’t done citations, I expect there will be a number of errors I haven’t yet noticed, and I also expect that there are many challenges for the view that I haven’t yet identified. The full paper will be co-authored with Christian Tarsney.
[1] Axiology concerns which outcomes are better than which others. It doesn’t cover deontic theory, which is about what one ought to do. Unless consequentialism is true, it’s not generally true that one ought to do what produces the best outcome.
[2] Another requirement was that the theory should solve the non-identity problem, entailing the “no difference view”, but many axiologies do this.
[3] Note that the second implication only follows if we assume that the betterness relation satisfies the axioms of expected utility theory (or certain weakenings of those axioms) and an ex ante separability principle. However, I find proposed responses to fanaticism via non-standard decision theory (such as discounting small probabilities) very unsatisfying, so I’ll accept these axioms for the purposes of this note.
[4] Combining this with the Very Repugnant Conclusion, on the total view this “sufficiently good outcome” could be a billion billion galaxies of extreme suffering plus some very large number of lives that are barely worth living.
[5] For example, Toby Ord’s Evaluating the Infinite.
Glad you’re fleshing this out and pushing the community to take variety/diversity more seriously as part of population axiology. I’ve had similar thoughts in this direction, and I think the core intuition is very compelling.
Caveat: I’ve only skimmed maybe a quarter of the full post so far, and I can already see that it goes well beyond the simple claim that “variety matters”: it adds a lot of context, formal structure, and specific assumptions/conditions. So I’m not trying to say this isn’t a much-needed contribution.
My reaction is more about framing. I worry that the “new theory” framing + all the new words may make the central intuition feel more novel or exotic than it is. Many people, including/especially many who would not identify as utilitarians or EAs, already have the intuition that the value of a world depends not only on total welfare, but also on the diversity, richness, or non-redundancy of the lives/experiences it contains — roughly, that additional near-duplicate lives have diminishing marginal value.
So I’d find it helpful to separate, as clearly as possible, the widely shared motivating intuition from the more specific Saturationist implementation. Otherwise I worry the jargon makes the view feel more alien or proprietary than it needs to be, when the underlying motivation may actually be quite intuitive to many people.
I haven’t yet got past section 1.4, “Arguments for Value of Variety”, because I’m just a bit unconvinced.
You could reword the intuition pump section like this:
Imagine some truly terrible moment, say extreme torture. Suppose that this torture is far more terrible than anything humanity has experienced to date: you or I would give up years of ordinary happy life just to avoid such a peak of despair. But now suppose that this torture is ever so slightly less bad than some other torture, the worst that could conceivably be produced with the same resources. For example, a radically different device is used which leads to an ever so slightly more painful experience.
What is better? The worst possible torture alternating with the slightly less bad torture? Or just the slightly less bad torture for the rest of time?
I suppose I can imagine someone saying the former, but I wouldn’t. I just want less suffering! You could dismiss this rewrite by saying variety is only good if it’s variety of good things, but that would introduce an asymmetry, and I’m not sure it’s justified. I suspect people say they like variety because we have repeatedly experienced it as pleasurable, and that introduces a bias we struggle to avoid when asked to judge scenarios that don’t differ in welfare. For the same reason I’m a little unconvinced by the intrapersonal variety argument.
On the realisation-value argument: I don’t really think there is intrinsic value in things being realised. If, in the distant Arctic, some polar bear walks a route that no polar bear has walked before but which is the exact same in every welfare-relevant way, I just don’t really care. Which is another way of saying: realising new things can indeed be great, but only when we can enjoy them for being new.
On the benefits-for-axiology point: this doesn’t seem so much an argument for variety as a direct argument for the Saturation View. If the Saturation View allows us to avoid lots of other unpalatable conclusions, then it may be worth adopting for that alone!
Did you see that there’s a section in the post about negative value? It does discuss the first point you raised: that our intuitions about variety in suffering aren’t as clear-cut, and that it’s not so obvious how we’d ideally handle these cases:
The Saturation View confuses me: it seems highly contingent on the idea that most societies have reached diminishing marginal returns in utility per additional person, which I would not assume. How would attributing intrinsic value to diversity be more valuable than total utilitarianism when a society has not hit diminishing returns?