Non-additive axiologies in large worlds


Abstract

Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say ‘yes’, but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say ‘no’. This distinction is practically important: additive axiologies support ‘arguments from astronomical scale’ which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a large future population, while non-additive axiologies need not. We show, however, that when there is a large enough ‘background population’ unaffected by our choices, a wide range of non-additive axiologies converge in their implications with some additive axiology—for instance, average utilitarianism converges to critical-level utilitarianism and various egalitarian theories converge to prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from astronomical scale, and other arguments in practical ethics that seem to presuppose additive separability, may be truth-preserving in practice whether or not we accept additive separability as a basic axiological principle.

Introduction

The world we live in is both large and populous. Our planet, for instance, is 4.5 billion years old and has borne life for roughly 4 billion of those years. At any time in its recent history, it has played host to billions of mammals, trillions of vertebrates, and quintillions of insects and other animals, along with countless other organisms. Our galaxy contains hundreds of billions of stars, many or most of which have planets of their own. The observable universe is billions of light-years across, containing hundreds of billions of galaxies—and is probably just a small fraction of the universe as a whole. It may be, therefore, that our biosphere is just one of many (perhaps infinitely many). Finally, the future is potentially vast: our descendants could survive for a very long time, and might someday settle a large part of the accessible universe, gaining access to a vast pool of resources that would enable the existence of astronomical numbers of beings with diverse lives and experiences.

These facts have ethical implications. Most straightforwardly, the potential future scale of our civilization suggests that it is extremely important to shape the far future for the better. This view has come to be called longtermism, and its recent proponents include Bostrom (2003, 2013), Beckstead (2013, 2019), Cowen (2018), Greaves and MacAskill (2019), and Ord (2020). There are many ways in which we might try to positively influence the far future—e.g., building better and more stable institutions, shaping cultural norms and moral values, or accelerating economic growth. But one particularly obvious concern is ensuring the long-term survival of our civilization, by avoiding civilization- or species-ending ‘existential catastrophes’ from sources like nuclear weapons, climate change, biotechnology, and artificial intelligence.[1] Longtermism in general, and the emphasis on existential catastrophes in particular, have major revisionary practical implications if correct, e.g., suggesting the need for major reallocations of resources and collective attention (Ord, 2020, pp. 57ff ).

All these recent defenses of longtermism appeal, in one way or another, to the astronomical scale of the far future. For instance, Beckstead’s central argument starts from the premises that ‘Humanity may survive for millions, billions, or trillions of years’ and ‘If humanity may survive for millions, billions, or trillions of years, then the expected value of the future is astronomically great’ (Beckstead, 2013, pp. 1–2). Importantly for our purposes, the astronomical scale of the far future most plausibly results from the astronomical number of individuals who might exist in the far future: while the far future population might consist, say, of just a single galaxy-spanning individual, the futures that typically strike longtermists as most worth pursuing involve a very large number of individuals with lives worth living (and conversely, the futures most worth avoiding involve a very large number of individuals with lives worth not living).

Under this assumption, we can understand arguments like Beckstead’s as instantiating the following schema.

Arguments from Astronomical Scale

Because far more welfare subjects or value-bearing entities are affected by A than by B, we can make a much greater difference to the overall value of the world by focusing on A rather than B.

Beckstead and other longtermists take this schema and substitute, for instance, ‘the long-run trajectory of human-originating civilization’ for A and ‘the (non-trajectory-shaping) events of the next 100 years’ for B. To illustrate the scales involved, Bostrom (2013) estimates that if we manage to settle the stars, our civilization could ultimately support at least 10^32 century-long human lives, or 10^52 subjectively similar lives in the form of simulations. Since only a tiny fraction of those lives will exist in the next century or millennium, it seems prima facie plausible that even comparatively minuscule effects on the far future (e.g., small changes to the average welfare of the far-future population, or to its size, or to the probability that it comes to exist in the first place) would be vastly more important than any effects we can have on the more immediate future.[2]
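To get a feel for the arithmetic behind such claims, here is a rough back-of-the-envelope sketch in Python. The 10^52 figure is Bostrom’s; the 1% credence and the tiny risk reduction mirror his example quoted in footnote 2; the near-term comparison population is our own illustrative assumption.

```python
# Illustrative arithmetic only: the estimates are Bostrom's (2013);
# the final comparison is our own rough assumption.
far_future_lives = 1e52   # Bostrom's simulation-based lower bound
credence = 0.01           # 'a mere 1 per cent chance of being correct'
risk_reduction = 1e-20    # one billionth of one billionth of one
                          # percentage point: 1e-9 * 1e-9 * 1e-2

expected_gain = far_future_lives * credence * risk_reduction
near_term_lives = 1e10    # rough scale of the present population

print(f"{expected_gain:.0e}")                    # 1e+30 expected lives
print(f"{expected_gain / near_term_lives:.0e}")  # 1e+20 times the present population
```

On these numbers, even a vanishingly small reduction in existential risk swamps anything that affects only the present population.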

Should we find arguments from astronomical scale persuasive? That is, does the fact that A affects vastly more individuals than B give us strong reason to believe, in general, that A is vastly more important than B? Although there are many possible complications, the sheer numbers make these arguments quite strong if we accept an axiology (a theory of the value of possible worlds or states of affairs) according to which the overall value of the world is simply a sum of values contributed by each individual in that world—e.g., the sum of individual welfare levels. In this case, the effect that some intervention has on the overall value of the world scales linearly with the number of individuals affected (all else being equal), and so astronomical scale implies astronomical importance.

But can the overall value of the world be expressed as such a sum? This question represents a crucial dividing line in axiology, between axiologies that are additively separable (hereafter usually abbreviated ‘additive’) and those that are not. Additive axiologies allow the value of a world to be represented as a sum of values independently contributed by each value-bearing entity in that world, while non-additive axiologies do not. For example, total utilitarianism claims that the value of a world is simply the sum of the welfare of every welfare subject in that world, and is therefore additive. On the other hand, average utilitarianism, which identifies the value of a world with the average welfare of all welfare subjects, is non-additive.
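In symbols (our notation; the paper introduces its own formalism in section 2), the distinction looks like this:

```latex
% Additive separability (notation ours): for some value function f,
% the value of a world is a sum of independent per-individual terms.
V(w_1, \dots, w_n) = \sum_{i=1}^{n} f(w_i)

% Total utilitarianism is the special case f(w) = w:
V_{\mathrm{TU}}(w_1, \dots, w_n) = \sum_{i=1}^{n} w_i

% Average utilitarianism divides by the population size n, so each
% individual's contribution depends on how many others exist; it
% therefore cannot be written in the additive form above:
V_{\mathrm{AU}}(w_1, \dots, w_n) = \frac{1}{n} \sum_{i=1}^{n} w_i
```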

When we consider non-additive axiologies, the force of arguments from astronomical scale becomes much less clear, especially in variable-population contexts (i.e. when comparing possible populations of different sizes). They therefore represent a challenge to the case for longtermism and, more particularly, to the case for the overwhelming importance of avoiding existential catastrophe. As a stylized illustration: suppose that there are 10^10 existing people, all with welfare 1. We can either (O1) leave things unchanged, (O2) improve the welfare of all the existing people from 1 to 2, or (O3) create some number n of new people with welfare 1.5. Total utilitarianism, of course, tells us to choose O3, as long as n is sufficiently large. But average utilitarianism—while agreeing that O3 is better than O1 and that the larger n is, the better—nonetheless prefers O2 to O3 no matter how astronomically large n may be. Now, additive axiologies can disagree with total utilitarianism here if they claim that adding people with welfare 1.5 makes the world worse instead of better; but the broader point is that they will almost always claim that the difference in value between O3 and O1 becomes astronomically large (whether positive or negative) as n increases—bigger, for example, than the difference in value between O2 and O1. Non-additive axiologies, on the other hand, need not regard O3 as making a big difference to the value of the world, regardless of n. Again, average utilitarianism agrees with total utilitarianism that O3 is an improvement over O1, but regards it as a smaller improvement than O2, even when it affects vastly more individuals.
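The example can be checked directly. Here is a minimal Python sketch (the values of n in the loop are arbitrary choices of ours; the paper states the example abstractly):

```python
# O1: leave 1e10 people at welfare 1; O2: raise them all to welfare 2;
# O3: additionally create n new people at welfare 1.5.

def total_util(pop):    # pop: list of (welfare, count) pairs
    return sum(w * count for w, count in pop)

def average_util(pop):
    return total_util(pop) / sum(count for _, count in pop)

existing = (1.0, 10**10)
o2 = [(2.0, 10**10)]

for n in (10**12, 10**20, 10**40):
    o3 = [existing, (1.5, n)]
    # Total utilitarianism: O3 beats O2 once n is large enough.
    # Average utilitarianism: O3's average approaches 1.5 but never
    # reaches O2's average of 2, however astronomical n becomes.
    print(n, total_util(o3) > total_util(o2),
          average_util(o3) < average_util(o2))
# Each line prints 'True True': O3 dominates on the total view,
# yet O2 stays strictly better on the average view.
```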

Thus, the abstract question of additive separability seems to play a crucial role with respect to arguably the most important practical question in population ethics: the relative importance of (i) ensuring the long-term survival of our civilization and its ability to support a very large number of future individuals with lives worth living vs. (ii) improving the welfare of the present population.

The aim of this paper, however, is to show that under certain circumstances, a wide range of non-additive axiologies converge in their implications with some counterpart additive axiology. This convergence has a number of interesting consequences, but perhaps the most important is that non-additive axiologies can inherit the scale-sensitivity of their additive counterparts. This makes arguments from astronomical scale less reliant on the controversial assumption of additive separability. It thereby increases the robustness of the practical case for the overwhelming importance of the far future and of avoiding existential catastrophe.

Our starting place is the observation that, according to non-additive axiologies, which of two outcomes is better can depend on the welfare of the people unaffected by the choice between them. That is, suppose we are comparing two populations X and Y.[3] And suppose that, besides X and Y, there is some ‘background population’ Z that would exist either way. (Z might include, for instance, past human or non-human welfare subjects on Earth, faraway aliens, or present/future welfare subjects who are simply unaffected by our present choice.) Non-additive axiologies allow that whether X-and-Z is better than Y-and-Z can depend on facts about Z.[4]
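A toy numerical example under average utilitarianism (all welfare levels and population sizes here are our own invented numbers) shows this dependence:

```python
# Under average utilitarianism, whether X-and-Z beats Y-and-Z
# depends on the background population Z. Invented numbers.

def au(pop):  # pop: list of (welfare, count) pairs
    return sum(w * count for w, count in pop) / sum(count for _, count in pop)

x = [(2.0, 100)]            # X: 100 people at welfare 2
y = [(1.0, 1000)]           # Y: 1000 people at welfare 1

z_low  = [(0.5, 10**6)]     # background with low average welfare
z_high = [(3.0, 10**6)]     # background with high average welfare

print(au(x + z_low)  > au(y + z_low))   # False: Y-and-Z is better
print(au(x + z_high) > au(y + z_high))  # True:  X-and-Z is better
```

The same X and Y, ranked oppositely, purely because of facts about Z.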

With this in mind, our argument has two steps. First, we prove several results to the effect that, in the large-background-population limit (i.e., as the size of the background population Z tends to infinity), non-additive axiologies of various types converge with counterpart additive axiologies. Thus, these axiologies are effectively additive in the presence of sufficiently large background populations. Second, we argue that the background populations in real-world choice situations are, at a minimum, substantially larger than the present and near-future human population. This provides some prima facie reason to believe that non-additive axiologies of the types we survey will agree closely with their additive counterparts in practice. More specifically, we argue that real-world background populations are large enough to substantially increase the importance that average utilitarianism (and, more tentatively, variable value views) assign to avoiding existential catastrophe. Thus, our arguments suggest, it is not merely the potential scale of the future that has important ethical implications, but also the scale of the world as a whole—in particular, the scale of the background population.
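To illustrate the first step: as the abstract notes, average utilitarianism (AU) converges to a critical-level view, and in the limit the relevant critical level is the background population’s average welfare. A toy numerical check of this (our invented populations, not the paper’s proof):

```python
# As |Z| grows, AU's ranking of X-and-Z vs Y-and-Z converges to the
# ranking of X vs Y under critical-level utilitarianism (CLU) with
# critical level c = Z's average welfare.

def au(pop):  # pop: list of (welfare, count) pairs
    return sum(w * count for w, count in pop) / sum(count for _, count in pop)

def clu(pop, c):  # critical-level utilitarian value
    return sum((w - c) * count for w, count in pop)

c = 0.5                     # background average welfare (assumed)
x = [(2.0, 100)]            # X: 100 people at welfare 2
y = [(1.0, 1000)]           # Y: 1000 people at welfare 1
clu_prefers_x = clu(x, c) > clu(y, c)   # False: CLU prefers Y

for z_size in (10, 10**3, 10**6):
    z = [(c, z_size)]
    au_prefers_x = au(x + z) > au(y + z)
    print(z_size, au_prefers_x == clu_prefers_x)
# Prints: 10 False / 1000 True / 1000000 True. Once Z is large
# relative to X and Y, the two axiologies agree.
```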

The paper proceeds as follows: section 2 introduces some formal concepts and notation, while section 3 formally defines additive separability and describes some important classes of additive axiologies. In sections 4–5, we survey several important classes of non-additive axiologies and show that they become additive in the large-background-population limit. In section 6, we consider the size and other characteristics of real-world background populations and, in particular, argue that they are at least substantially larger than the present human population. In sections 7–8, we answer two objections: that we should simply ignore background populations for decision-making purposes, and that we should apply ‘axiological weights’ to non-human welfare subjects that reduce their contribution to the size of the background population. Section 9 considers how real-world background populations affect the importance of avoiding existential catastrophe according to average utilitarianism and variable-value views. Section 10 briefly describes three more potential implications of our results: they make it harder to avoid (a generalization of) the Repugnant Conclusion, help us to extend non-additive axiologies to infinite-population contexts, and suggest that agents who accept non-additive axiologies may be vulnerable to a novel form of manipulation. Section 11 is the conclusion.

Read the rest of the paper


  1. ↩︎

    The importance of avoiding existential catastrophe is especially emphasized by Bostrom (2003, 2013) and Ord (2020).

  2. ↩︎

    Thus, for instance, in reference to the 10^52 estimate, Bostrom claims that ‘if we give this allegedly lower bound...a mere 1 per cent chance of being correct, we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives’ (Bostrom, 2013, p. 19).

  3. ↩︎

    We follow the tradition in population ethics that ‘populations’ are individuated not only by which people they contain, but also by what their welfare levels would be. (However, in the formalism introduced in section 2, the populations we’ll consider are anonymous, i.e. the identities of the people are not specified.)

  4. ↩︎

    The role of background populations in non-separable axiologies has received surprisingly little attention, but has not gone entirely unnoticed. In particular, Budolfson and Spears (ms) consider the implications of background populations for issues related to the ‘Repugnant Conclusion’ (see §10.1 below). And, as we discovered while revising this paper, an argument very much in the spirit of our own (though without our formal results) was elegantly sketched several years ago in a blog post by Carl Shulman (Shulman, 2014).
