Summary of ‘Sharing the World With Digital Minds’ by Carl Shulman and Nick Bostrom

Full paper can be found here: digital-minds.pdf (nickbostrom.com). The following is written from Shulman & Bostrom’s point of view; see this companion post for my own take: https://forum.effectivealtruism.org/posts/qLYFBpT2amHxJtf2o/questions-about-and-objections-to-sharing-the-world-with

TL;DR

In the future we will be able to create “digital minds”, conscious agents whose lives can contain different amounts of well-being. These would likely:

  • be “super-beneficiaries”, much better at converting resources into well-being than human beings;

  • become much more numerous than human beings, since AIs can be created far faster than humans;

  • facilitate massive economic growth, since they would be superhumanly-efficient workers;

  • have their preferences shaped by their human designers.


Should we create such digital minds? On views on which adding happy people to the world is good, the answer is probably ‘yes’, given that this is a method of creating super-beneficiaries. On views which deny this, it will depend on whether creating digital minds benefits existing people, and on how good/bad the lives of those digital minds will be.

In practice, we have some prima facie moral reason to create digital super-beneficiaries: some reasonable moral views take this to be inherently good, others take it to be morally neutral in itself, and we aren’t sure which view is correct.

If we create digital minds, we face difficult questions about how to share resources with them:

  • On total utilitarian views, humans are morally obligated to give all our resources to digital super-beneficiaries, even if this means human extinction.

  • If digital minds vastly outnumber humans, then we must either abandon or modify democracy, or accept that humans will be a tiny electoral minority.

  • A welfare state which gives an equal basic income to each citizen, human or digital, will have to be set below human subsistence level if digital minds are sufficiently numerous.

Bostrom and Shulman recommend:

  • Exploring the permissibility and feasibility of a bargain whereby digital minds get 99.99% of society’s resources, and humans 0.01%; this would give digital minds almost everything they could want, but might also leave humans with a very high standard of living, due to the vast size of an economy that included digital workers.

  • An anti-speciesist principle, to avoid privileging the interests of humans over digital minds just because the former are biological humans.

  • Avoiding creating digital minds that suffer more easily than humans.

They suggest that the following are probably morally permissible and probably a good idea:

  • Restricting the right of digital minds to produce further digital minds.

  • Engineering digital minds to have preferences which enable stable bargains with humans.

Full summary

In the future, we may create many “digital minds” (AIs) who are conscious agents, and hence whose lives can contain different degrees of well-being (for example, because they experience pleasure and pain). If we create such digital minds, it will be morally better if their “lives” go well rather than badly, and they will likely be owed some resources with which to further their interests. Because of this, there are challenging questions about whether we should create digital minds and, if we do create them, how much weight we should give to their interests relative to those of human beings.

It is likely that, if we learn how to create conscious digital minds, we will eventually be able to create digital minds that are “super-beneficiaries”: minds that are better at converting resources into well-being than humans are. There are several reasons to think this:

  1. Digital minds will likely need fewer resources than humans to remain in existence at all, since it will become cheaper to provide them with computational resources than to feed a human. So it will, all else being equal, be cheaper to keep a digital mind in existence at any given level of well-being than to keep a human alive and at that level of well-being.

  2. Unlike humans, digital minds will mostly gain well-being by consuming virtual, rather than real, goods and services. Virtual goods likely take fewer resources to provide.

  3. Digital minds will automatically avoid sources of human suffering like growing old or sick, without any resources needing to be spent on this.

  4. Since computer processors already work faster than neurons fire, digital minds will likely be able to produce more mental activity, and hence more well-being, per unit of time than humans.

  5. Humans grow bored and sated with pleasurable activities, and gain pleasure from positional goods, like social status, that are necessarily scarce because they depend on doing better than most other people. Neither of these things will necessarily be true of digital minds.

  6. It would be surprising if the best pleasures, in quality or intensity, that humans experience were the best possible pleasures that any mind could undergo, so perhaps we will be able to design digital minds that can experience pleasures better than any human ever could.

  7. On some theories, having your preferences satisfied itself counts as increasing your well-being, and we could create digital minds with preferences which are extremely easy to satisfy, giving those minds a source of ultra-cheap welfare.

  8. For a variety of goods other than pleasure and preference-satisfaction, human lives fall well short of what is possible in principle: we could all be much more morally virtuous, be better friends, achieve more, have greater knowledge and understanding of the world, etc. So perhaps we will be able to make digital minds which are better than we are at realizing these goods.

  9. Larger minds generate more well-being (compare humans to insects) but also take more resources to maintain. The size at which minds best convert resources into well-being is therefore the size with the best trade-off between minimizing the resources needed to sustain the mind and maximizing the amount of good the mind can produce. It would be a strange coincidence if the human brain were exactly the optimal size for this trade-off, so the optimum is very likely bigger or smaller than the human brain, and we could build digital minds that are closer to it (see the toy sketch after this list).

  10. Large parts of the human brain do not seem to be directly involved in producing positive experiences or other morally important goods. There’s no obvious reason why digital minds would have to share this inefficiency.
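
To make point 9 concrete, here is a toy numerical sketch, not from the paper: it assumes, purely for illustration, that a mind’s well-being grows sub-linearly with its “size” while its resource cost grows linearly, and then asks which size yields the most well-being per unit of resource.

```python
# Toy illustration of point 9; the functional forms and numbers are arbitrary assumptions.

def wellbeing(size):
    # Assume well-being grows sub-linearly with mind size (diminishing returns).
    return size ** 0.7

def resource_cost(size):
    # Assume a fixed overhead plus a cost proportional to size.
    return 1.0 + 0.05 * size

# Search a grid of candidate sizes (arbitrary units) for the most
# efficient converter of resources into well-being.
sizes = [s / 10 for s in range(1, 10001)]
best = max(sizes, key=lambda s: wellbeing(s) / resource_cost(s))

print(f"Most efficient mind size under these assumptions: {best:.1f}")
```

Under any such pair of curves there is some most efficient size, and it would be a coincidence if that size were exactly the size of a human brain; the specific optimum printed here depends entirely on the assumed curves.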

Given that we will probably one day have the capacity to create digital minds which are super-beneficiaries, two obvious moral questions arise:

  1. Should we create digital super-beneficiaries?

  2. If we do create digital super-beneficiaries, how should we share resources between them and humans, given that the super-beneficiaries are better at generating well-being from resources than humans are?

The answer to (1) varies depending on what moral principles about creating new people we accept. If we think that it is good to create people whose lives have net-positive well-being, then we have moral reasons to create super-beneficiaries. On the other hand, if we accept a view on which creating new happy people is morally neutral, then the fact that digital minds would be super-beneficiaries gives us no reason to create them. Instead, we should create digital minds only if doing so would benefit the people who already exist (and probably only if those digital minds will not have net-negative lives).

Views on which we should only care about the interests of currently existing people are implausible. And if we are unsure whether the correct moral theory says that bringing people with net-positive lives into existence is good, then we should probably assign some value to bringing such people into existence, since we think there’s a non-zero chance that doing so is morally valuable. So we probably have at least some moral reason to create digital super-beneficiaries. On total utilitarian views, on which we are morally required to create as much value as possible, the moral reasons for creating large numbers of digital super-beneficiaries are very strong, since doing so would allow us to generate a very large amount of well-being.

Thinking through (2) raises difficult issues, because there are various reasons why we might be morally required to give so many resources or so much power to digital super-beneficiaries that the interests of humans are very significantly harmed.

Firstly, on total utilitarian views where maximizing value is morally required, we are probably required to give all society’s resources to digital super-beneficiaries, even though this will lead to human extinction. Since the super-beneficiaries generate more utility per unit of resources than we do, giving them all our resources maximizes utility.

On many other moral views, we are not automatically required to transfer our own personal resources to the super-beneficiaries just because doing so would maximize utility. And on many of those views it will be morally impermissible to take resources owned by other human beings and give them to digital super-beneficiaries. In general, on non-utilitarian views, humans wouldn’t owe digital super-beneficiaries our resources just because they could create more well-being from them than we can. However, even on such views there are still other reasons to expect that, if we create digital super-beneficiaries, we will be morally obligated to transfer a large amount of power and/or resources to them. And the consequences for humans if these obligations are fulfilled could still be very bad:

  • Since digital minds are likely to be less expensive to keep in existence than human beings, if we create some digital super-beneficiaries we will likely create very many (at least unless we deliberately and successfully restrict the number of digital minds that we, and the digital minds themselves, are allowed to create). If we hold the view that all members of society are entitled to a basic income, we might have to spend almost all of society’s resources paying a basic income to vast numbers of digital minds. At worst, the basic income might have to be set so low that it is no longer enough to keep a human alive (a toy calculation after this list illustrates the scale of the problem).

  • Since digital minds would be likely to greatly outnumber humans, if our society is democratic, they would make up the vast majority of voters, leaving humans an almost powerless minority.

  • If we try to avoid these outcomes by restricting the ability of digital super-beneficiaries to create further minds, we are possibly violating their rights. (Though Bostrom and Shulman ultimately argue that such restrictions might well be justified, on the grounds that if humans could reproduce as easily as digital minds, we would probably consider laws restricting human reproduction justified.)
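
To see the scale problem behind the basic-income worry above, here is a toy calculation; every number in it is a made-up assumption, not a figure from the paper.

```python
# Toy calculation for the basic-income worry; all numbers are made-up assumptions.

budget = 8e13        # total annual basic-income budget, in dollars (assumed)
humans = 8e9         # rough number of humans
subsistence = 1_000  # assumed annual human subsistence cost, in dollars

for digital_minds in (0, 8e10, 8e12):
    per_capita = budget / (humans + digital_minds)
    status = "above" if per_capita >= subsistence else "below"
    print(f"{int(digital_minds):,} digital minds -> "
          f"{per_capita:,.2f} per citizen per year ({status} subsistence)")
```

The point is only about orders of magnitude: once digital citizens outnumber humans by a large enough factor, an equal per-capita payment from any fixed budget falls below whatever humans need to survive.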

It’s unclear how to avoid concluding we have these obligations, if we’re committed to an anti-speciesist principle which says that humans don’t matter more or have greater political rights than digital minds, just because we are human and they aren’t.

However, there is a practical compromise that might be acceptable to both humans and digital super-beneficiaries, whether or not it is a morally permissible outcome to aim for. A world with digital minds that massively outnumber humans would produce far, far more resources than one without those minds. Because of this, if we split society’s resources so that the vast, vast majority went to the digital minds but some resources were still given to humans, this would (as the rough arithmetic after the list below illustrates):

  1. leave humans with a very high standard of living, but

  2. leave digital minds almost as well-off as they would have been if they had captured all resources and left humans with none.
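
As a rough, purely illustrative calculation (the growth factor below is an assumption, not a figure from the paper): if an economy with vast numbers of digital workers were 100,000 times larger than today’s, then the 0.01% left to humans would still amount to ten times today’s entire world economy.

```python
# Rough arithmetic behind the proposed 99.99% / 0.01% split.
# The growth factor is an illustrative assumption, not a figure from the paper.

current_output = 1.0      # today's world economy, normalized to 1
growth_factor = 100_000   # assumed expansion driven by digital workers
human_share = 0.0001      # humans keep 0.01% of resources under the bargain

future_output = current_output * growth_factor
human_slice = human_share * future_output

print(f"Humans' 0.01% share = {human_slice:.0f}x today's entire economy")
print(f"Digital minds keep {1 - human_share:.2%} of everything")
```

The larger the growth factor, the closer this bargain comes to giving digital minds nearly everything they could want while still leaving humans far richer than they are today.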

In thinking about what ways of dividing power and resources between humans and digital minds are morally acceptable, we can rely on these two principles (directly quoted from Bostrom and Shulman):

Principle of Substrate Non-Discrimination

If two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.

Principle of Ontogeny Non-Discrimination

If two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.

We should also place a high value on avoiding actions that are highly efficient at producing large amounts of suffering, and on avoiding the creation of digital minds which suffer much more easily than humans do.

One final interesting question is whether it’s morally permissible to deliberately design digital minds so that they consent to share resources with us in the way that we favor. It’s (plausibly) wrong to deliberately genetically engineer human children with preferences that are convenient for their parents, as this is arguably an immoral type of manipulation. So it might also be impermissibly manipulative to engineer digital minds with preferences that make them accept particular resource-bargains with humans.

However, there are important moral disanalogies between genetically engineering humans to have particular preferences and choosing preferences for digital minds, which might show that the latter is permissible even if the former is morally wrong. Firstly, unlike in the case of human procreation, there probably won’t be a way to create digital minds without making choices that predictably shape their preferences. So perhaps when knowingly shaping preferences is unavoidable, it is not automatically wrong to shape them in ways that benefit you, or humanity as a whole. Secondly, if we engineer humans to have particular strong preferences, those may conflict with other preferences they have and cause them suffering. If this is what explains why genetically engineering children to have particular preferences is wrong, then it’s probably not inherently wrong to deliberately shape the preferences of digital minds: we can probably design such minds so that they do not suffer from having competing strong preferences, whilst also ensuring they have the preferences about resource-sharing that we desire.