Questions about and Objections to ‘Sharing the World with Digital Minds’ (2020)

First see my summary, or the original paper. Here are some things that stuck out to me:


1. Apparent conflation between individual outweighing and collective outweighing. Bostrom and Shulman (B&S) define ‘super-beneficiaries’ as individuals who are better at converting resources into (their own) well-being than any human. But they then go on to say that one reason to think that digital minds will be super-beneficiaries is that they are likely to greatly outnumber human beings eventually. Part 2 of the paper is a long list of reasons to think that we might be able to construct digital super-beneficiaries. The first reason given is that, because digital minds can be created more quickly and easily than humans can reproduce, they will likely come to far outnumber humans once pioneered:
‘One of the most basic features of computer software is the ease and speed of exact reproduction, provided computer hardware is available. Hardware can be rapidly constructed so long as its economic output can pay for manufacturing costs (which have historically fallen, on price-performance bases, by enormous amounts; Nordhaus, 2007). This opens up the door for population dynamics that would take multiple centuries to play out among humans to be compressed into a fraction of a human lifetime. Even if initially only a few digital minds of a certain intellectual capacity can be affordably built, the number of such minds could soon grow exponentially or super-exponentially, until limited by other constraints. Such explosive reproductive potential could allow digital minds to vastly outnumber humans in a relatively short time—correspondingly increasing the collective strength of their claims.’ (p.3)

However, evidence that digital minds will outnumber humans is not automatically evidence that individual digital minds will produce well-being more efficiently than humans, and so count as super-beneficiaries by their definition. More importantly, the fact that there will be large numbers of digital minds, whilst evidence that most future utility will be produced by digital minds, is not evidence that digital minds, as a collective, will convert resources into utility more efficiently than humans as a collective will. B&S seem to have conflated both of these claims with a third: that if digital minds are far more numerous than humans, the amount of well-being digital minds collectively generate will be higher than the amount that humans collectively generate.

I’m open to this being essentially just a typesetting error (putting the subsection in the wrong section!).



2. They assume that equality of resource allocation trumps equality of welfare. B&S worry that, if digital minds are a) far more numerous than humans and b) able to survive on far fewer resources than humans, then egalitarian principles will force us to set basic income at a level lower than human subsistence. As far as I can tell the argument is:

  1. giving humans a higher basic income than digital minds would be unjust discrimination, but

  2. there are likely to be many more digital minds than humans, and

  3. digital minds can survive on far fewer resources than humans can,

  4. so we might not be able to afford to give both humans and digital minds a basic income high enough for humans to survive on.
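A toy calculation may help to make the shape of this worry concrete. The numbers below are my own and purely illustrative, not anything given by B&S; the point is just that an equal per-capita payment set at human-subsistence level becomes unaffordable once digital minds vastly outnumber humans, while a need-based scheme need not be.

```python
# Purely illustrative numbers (mine, not B&S's), in arbitrary resource units.
human_population    = 8e9     # assumed number of humans
digital_population  = 1e15    # assumed: digital minds vastly outnumber humans
human_subsistence   = 10_000  # assumed yearly cost of keeping a human alive
digital_subsistence = 10      # assumed yearly cost of running a digital mind
total_budget        = 1e18    # assumed total resources available per year

# An equal basic income, set at human-subsistence level for everyone:
equal_ubi_cost = human_subsistence * (human_population + digital_population)
print(equal_ubi_cost > total_budget)   # True: unaffordable on these numbers

# A need-based scheme: each kind of mind receives its own subsistence level:
tiered_cost = (human_subsistence * human_population
               + digital_subsistence * digital_population)
print(tiered_cost < total_budget)      # True: affordable on these numbers
```

On these made-up numbers the problem only arises if non-discrimination is read as requiring identical payments rather than payments proportionate to need, which is the reading questioned below.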


Setting aside questions about whether a high enough basic income for humans to survive on really would be unaffordable in a world with many digital minds, it’s not clear that any true non-discrimination principle requires the state to give everyone the same basic income.

For instance, in many countries that currently have welfare states, disabled people are entitled to disability benefits to which non-disabled people do not have access. Few people think that this is an example of unfair discrimination against the non-disabled, which suggests that it is permissible (and perhaps required) for governments to give higher levels of welfare support to citizens with higher levels of need.

So if humans need more resources to survive than do digital minds, then it is probably permissible for governments to give humans a higher basic income than they give to digital minds (at least by current common sense).


3. Unclear argument for a minor diminishment of human utility. B&S tentatively propose:

C = “in a world with very large numbers of digital minds, humans receiving .01% of resources might be 90% as good for humans as humans receiving 100% of resources.”

How B&S arrive at this isn’t very clear. According to them, C follows from the fact that an economy where most workers are digital minds would be vastly more productive than one where most workers are humans.

But they don’t explain why it follows, from the fact that such an economy would produce a very large amount of resources per human, that humans capturing .01% of resources might be 90% as good for them as capturing 100%. Presumably the idea is that once every human has reached some high absolute standard of living, giving them further resources doesn’t help them much, because resources have diminishing marginal utility. However, it’s hard to be sure this is what they mean: the argument is not spelled out.
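If diminishing marginal utility is indeed the intended argument, a toy calculation shows how a figure like C could fall out. The assumptions below are mine rather than B&S’s: purely for illustration, human well-being is taken to grow logarithmically with resources (measured in multiples of a subsistence baseline), and the digital-mind economy is taken to be astronomically larger than that baseline.

```python
import math

# Assumption (mine, for illustration): well-being grows logarithmically
# with resources, measured in multiples of a subsistence baseline of 1.
def wellbeing(resources):
    return math.log(resources)

total_output = 1e40   # assumed output of the digital-mind economy, in baseline units
human_share  = 1e-4   # humans capture 0.01% of resources

full  = wellbeing(total_output)                # well-being from capturing 100%
small = wellbeing(total_output * human_share)  # well-being from capturing 0.01%

print(small / full)   # ~0.9: the 0.01% share yields about 90% of the well-being
```

On these assumptions the 90% figure emerges almost automatically, but the result is very sensitive to the choice of utility function and to how large the economy is assumed to be, which is exactly why it would help if B&S spelled the argument out.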

(Also, in ‘Astronomical Waste’, Bostrom expresses scepticism about the claim that, above a certain level, further large increases in resources will make only a small difference to the well-being of a human in a world with digital minds. He reasons that in such a world we might have invented new and resource-intensive ways for humans to access very high well-being. However, Bostrom only says this may be true. And in ‘Digital Minds’, B&S are also tentative about the claim that humans could receive 90% of the benefits of 100% of resources by capturing 0.01% of the resources produced by a digital-mind economy. So there isn’t a crisp inconsistency between ‘Digital Minds’ and ‘Astronomical Waste’.)

4. Unclear evidence for relative cost of digital and biological subsistence. B&S claim that it is “plausible” that it will be cheaper to maintain digital minds at a subsistence level than to keep humans alive, but they don’t actually give much argument for this, or cite any source in support of it. I think this is a common assumption in EA and rationalist speculation about the future, and that it might be a good idea to check what the supporting evidence for it actually is. (There’s an implicit reference to Moore’s law – “The cost of computer hardware to support digital minds will likely decline”.)

5. Diminishing marginal utility as a condition of some values? B&S claim that digital minds could become super-beneficiaries by being designed so that they don’t become habituated to pleasures, in the way that humans eventually become bored or sated with food, sex etc.

One worry: it’s unclear that a digital mind could be built like this and remain able to function, since not getting sated might mean that it gets stuck undergoing the same pleasurable experience over and over again. On hedonistic theories of well-being, this might still make it a super-beneficiary, since a digital mind that spent all its time repeating the same high-intensity pleasure might well experience a large net amount of pleasure-minus-pain over its lifetime. But on the subset of objective list theories of value on which the best lives involve a balance of different goods, not getting sated might actually get in the way of even matching, let alone surpassing, humans in the efficiency with which resources are turned into well-being. (If you never get bored with one good, why move on to others and achieve balance?)



6. Difficulties with the claim that different minds can have different hedonic capacities. B&S claim that digital minds might be capable of experiencing pleasures more intense than any human could ever undergo. However, I am somewhat sceptical of the view that maximum possible pleasure intensity can vary between different conscious individuals. It is notoriously difficult to explain what makes one pleasure (or pain) more or less intense than another when the two occur in different individuals (the problem of "interpersonal utility comparisons"). I think that one of the best candidate solutions to this problem entails that all minds which can undergo conscious pleasures/pains have maximum pleasure/pain experiences with the same level of intensity. The argument for this is complex, so I’ve put it in a separate doc.

7. Maximising could undermine digital minds’ breadth. B&S’s discussion of objective list theory claims that digital minds could achieve goods like participation in strong friendships, intellectual achievement, and moral virtue to very high degrees, but they don’t discuss whether maximising for one of these goods would lead a digital mind towards a life containing very little of the others, or whether balance between these goods is part of a good life.

8. Unclear discussion of superhuman preferences. B&S list having stronger preferences as one way that digital minds could become super-beneficiaries. But their actual discussion of this possibility doesn’t really provide much argument that digital minds would or could have stronger-than-human preferences. It just says that it’s difficult to compare the strength of preferences across different minds, and then goes on to say that ‘emotional gloss’ and ‘complexity’ might be related to stronger preferences.

9. Conflation of two types of person-affecting view. B&S object to the idea that creating digital super-beneficiaries is morally neutral because creating happy people is neutral, by complaining that ‘strict person-affecting views’ must be wrong, because they imply that we have no reason to take action to prevent negative effects of climate change on people who do not yet exist. However, I don’t think this reasoning is very convincing.

A first objection: it’s not immediately clear that actions to prevent future harms from climate change are actually ruled out by views on which actions are only good if they improve things for some person. If someone is going to exist whether or not we take action against climate change, then taking action against climate change might improve things for them. However, this isn’t really a problem for B&S, since person-affecting views do seem to be plausibly refuted by the fact that actions which prevent climate harms, but also completely change the identity of everyone born in the future, are still good insofar as they prevent those harms.

A more serious objection: you might be able to deny that making happy people is good, even while rejecting person-affecting views on which an action can only be good if there is some person it makes better off. ‘Making happy people is neutral’ is a distinct claim from ‘an action is only good if there is at least one person it makes better off’. So the burden of proof is on B&S when they claim that if the latter is false, the former must be too. They need to either give an argument here, or at least cite a paper in the population ethics literature. (B&S do say that appealing to person-affecting views is just one way of arguing that creating super-beneficiaries is morally neutral, so they might actually agree with what I say in this paragraph.)

10. Possible overconfidence about the demandingness of deontic theories. B&S state outright that deontic moral theories imply that we don’t have personal duties to transfer our own personal resources to whoever would benefit most from them. Whilst I’m sure that most (maybe even all) deontologist philosophers think this, I’d be a little nervous about inferring from that to the claim that the deontological moral theories endorsed by those philosophers imply that we have no such obligation (or even that they fail to imply that we do have such an obligation).

My reason for this is as follows: contractualist theories are generally seen as "deontological", and I know of at least one paper in a top ethics journal arguing that contractualist theories in fact generate duties to help others that are just as demanding as those generated by utilitarian theories. I haven’t read this paper, so I don’t have an opinion on how strong its argument is, or whether, even if its conclusion is correct, it generates the result that we are obliged to transfer all (or a large amount) of our resources to super-beneficiaries. (My guess is not: it probably makes a difference that super-beneficiaries are unlikely to be threatened with death or significant suffering without the transfer.) But I think at the very least more argument is needed here.

11. The ‘principle of substrate non-discrimination’ is badly named, because it doesn’t actually rule out discrimination on the basis of substrate (i.e. the physical material a mind is made of). Rather, it rules out discrimination on the basis of substrate between minds that are conscious. This means it is actually compatible with saying that digital minds don’t have interests at all, if for instance you believe that nothing without biological neurons is conscious. (Some philosophers defend accounts of consciousness which seem to imply this: see the section on "biological theories of consciousness" on p.1112 of Ned Block’s ‘Comparing the Major Theories of Consciousness’.)

A principle compatible with denying, on the basis of their substrate, that digital minds have any rights probably shouldn’t be called the "principle of substrate non-discrimination". This is especially true given that accounts of consciousness which would justify denying that digital minds have interests are actually endorsed by some experts.

This post is part of my work for Arb Research.