An aspirationally comprehensive typology of future locked-in scenarios

Introduction

What is this?

In this post, I present a typology of future locked-in scenarios, understood as irreversible future configurations of the universe. It is concerned with the ultimate locked-in configuration or trajectory, as opposed to the specific events that lead to such a lock-in.

This typology aspires to be comprehensive, yet compact. That is, it attempts to describe as many scenarios as possible (in the limit, all of them) using a reasonably manageable number of parameters. I have prioritised those parameters that most affect the total moral valuation of a given scenario under total[1] hedonic utilitarianism. I expect that this typology will not, in fact, be fully comprehensive upon first publication; I intend to amend it based on feedback.

Why write this?

Many different scenarios of future lock-in have been proposed, but as far as I know they have not been classified into a compact, unified typology. Doing so would allow us to do the following useful things:

  • Discover previously unconsidered possibilities. Identifying the orthogonal axes along which existing scenarios vary may uncover new combinations (a toy enumeration along these lines is sketched after the typology below).

  • Gain clarity about the underlying characteristics of different scenarios. Narrative approaches to scenario generation risk over-emphasizing features that are irrelevant to the moral evaluation of a scenario, to the detriment of those that are relevant.

Short glossary

These terms are used throughout the typology. They are explained here for the benefit of a general audience. Most of them are common in discussions about longtermism, so feel free to skip this section and come back only if you need it as a reference.

  • Replicator: Short for self-replicating system. That is, something capable of making copies of itself. Living beings are usually replicators. In the future, autonomous machines could also be replicators; such self-replicating machines are called Von Neumann machines.

  • Lightcone: Suppose Earth emitted a bright pulse of light right now, and it expanded outwards in all directions. As time passed, the light would sweep through an ever-growing sphere of space. As nothing can travel faster than light, this ever-expanding sphere puts an upper bound on how far future earth-originating replicators can go. To represent the sphere’s growth, time can be added as a fourth dimension. Since visualizing 4D is hard, the three spatial dimensions can be reduced to two by flattening the sphere into a solid circle. A circle expanding along the dimension we just freed up (now representing time) traces out a cone. That is the lightcone, and we will never go outside it. (A compact formal statement follows the glossary.)

  • Shockwave: A self-replicating arrangement of matter expanding through space at near-lightspeed.

  • Carrying capacity: A term taken from biology: the highest density of a given replicator that a given environment can support.

  • Mentally biomorphic: Having the mental architecture of a living being.

  • Qualia: These are the minimal elements that make up your subjective experience. An instance of qualia is the redness of red, as opposed to, say, the wavelength of red light.

  • Valence: If an instance of qualia feels good, it has positive valence. Conversely, if it feels bad, it has negative valence. Hedonic utilitarians care about valence.

  • Sentients: Beings that have sentience. Sentience is the capacity to experience qualia.
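
A compact formal statement of the lightcone entry above, as a supplement: the coordinate choice and the flat-spacetime simplification (ignoring cosmological expansion) are mine, not part of the glossary. Taking Earth right now as the origin $(t, \mathbf{x}) = (0, \mathbf{0})$ and writing $c$ for the speed of light, the (solid) future lightcone is

$$\{(t, \mathbf{x}) \in \mathbb{R} \times \mathbb{R}^3 : t \ge 0,\ \lVert \mathbf{x} \rVert \le c\,t\},$$

and earth-originating replicators can only ever occupy spacetime points in this set.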

The typology

Earth-originating replicators do not exist

  • Non-earth-originating replicators don’t exist either

  • Non-earth-originating replicators exist

    • The same considerations as below apply[2]

Earth-originating replicators exist

  • They could occupy any one of the following volumes[3]:

    • Earth

    • A volume greater than Earth, but smaller than the lightcone

    • The lightcone

  • They could occupy them with any of the following densities:

    • At carrying capacity, under any of the following regimes:

      • Stable conflict

      • Coexistence based on mutual non-interference

      • Trade or other cooperation

    • Less than the carrying capacity[4]

  • Earth-originating replicators could be any combination of:

    • Biological beings

      • Biological nonhuman animals

      • Biological human beings

    • Digital beings

      • Mentally biomorphic

        • Digital nonhuman animals

        • Digital human beings

      • Not mentally biomorphic

        • Superintelligent AIs

        • Non-superintelligent AIs

        • Von Neumann machines

  • Regarding sentience

    • Regarding whether digital beings experience qualia or not:

      • All biomorphic digital beings either:

        • Are not capable of experiencing qualia

        • Can be reliably designed to either experience or not experience qualia

        • Obligatorily experience qualia

      • All digital beings either:

        • Are not capable of experiencing qualia

        • Can be reliably designed to either experience or not experience qualia

        • Obligatorily experience qualia

    • Regarding sentients (if they exist) and power:

      • The most powerful beings either:

        • Are sentient

        • Are not sentient

      • Different groups of sentients that are not among the most powerful (if such groups exist) could be:

        • Regarding their hedonic optimization:

          • Hedonically optimized to maximize their suffering

          • Hedonically optimized to maximize their bliss

          • Not hedonically optimized

        • Regarding their individual rights (or lack thereof):

          • Could be granted different levels of control over their living environments

          • Could be guaranteed different minimum living standards

          • Could be granted, or denied, the right to:

            • Control their self-replication

            • Not be forced to cease to exist

            • Voluntarily cease to exist
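
To make the orthogonal-axes framing concrete, here is a minimal, illustrative sketch of how one could enumerate candidate scenarios by taking the Cartesian product of a few of the axes above. The axis names and values are simplified paraphrases of the typology (the real typology has nested and conditional axes), and the code is only a toy, not part of the typology itself.

```python
from itertools import product

# Simplified, illustrative encoding of a few of the typology's axes.
# The labels are paraphrases of the nested list above; the real typology
# has more structure (nested and conditional axes).
axes = {
    "extent": ["Earth", "greater than Earth, sub-lightcone", "lightcone"],
    "density": ["at carrying capacity", "below carrying capacity"],
    "replicators": ["biological", "digital", "mixed"],
    "most_powerful_beings": ["sentient", "non-sentient"],
    "hedonic_optimization": ["for suffering", "for bliss", "none"],
}

# Every combination of values along these axes is a candidate scenario.
# Some correspond to scenarios already discussed; others may be new,
# which is the point of factoring scenarios into orthogonal axes.
scenarios = [dict(zip(axes, values)) for values in product(*axes.values())]

print(len(scenarios))  # 3 * 2 * 3 * 2 * 3 = 108 candidate scenarios
print(scenarios[0])
```

Even this toy version yields over a hundred combinations, which is part of why a compact set of axes can be more useful than a list of named scenarios.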

Examples of previously-considered scenarios included by this typology

  • Astronomical suffering from bringing farmed or wild animals along to extrasolar environments without first hedonically optimizing them.

  • A multipolar scenario controlled by artificial superintelligences with competing interests that strategically threaten each other with the production of immense sentient disvalue, and that sometimes carry out such threats.

  • Hedonium (or dolorium) shockwaves.

  • An economicum shockwave.

  • Very stable totalitarianism.

New[5] scenarios implied by this typology

  • Eternal universal war[6]

    • This is mostly speculation, but it seems reasonable to me that interstellar conflict could become locked in. Furthermore, interstellar warfare conducted via relativistic weapons seems likely to make the construction and defense of stellar-scale structures infeasible. This would significantly reduce the maximum carrying capacity.

  • The entire lightcone being inhabited almost exclusively by nonhuman animals.

    • This possibility may seem weird, but it could turn out that small animals have a higher valence/energy consumption ratio than humans[7]. If it also turns out that it is impossible for digital beings to experience qualia, a valence-maximiser may well tile the universe with some analogue of rats on heroin.

Limitations

  • Since hedonic utilitarianism is the moral perspective I am closest to adhering to, certain design choices reflect an implicit assumption of that framework. It would be valuable for future work to either extend this typology to incorporate other moral viewpoints, or to produce separate typologies based on them.

  • It is likely that this model fails to include at least one relevant axis of possible variation. Commenters are welcome to point these out, and I’ll amend the typology accordingly.

Many thanks to Agustín Covarrubias and David Solar for reviewing an earlier draft of this post, and to Pablo Stafforini for discussion regarding whether to write it.

  1. ^

    Total utilitarianism in the population-ethics sense; that is, as opposed to average utilitarianism.

  2. ^

    I expect some people to disagree with this choice, in particular those not subscribing to hedonic utilitarianism.

  3. ^

    Confinement to Earth can be explained by a permanent loss of the technological capacity required for space travel. However, scenarios where space travel remains technologically feasible until the end of time, yet no earth-originating replicators expand to fill the lightcone, may seem unlikely. Still, there are some reasons that could make this happen. The first is that non-earth-originating (alien) replicators might reach portions of Earth’s lightcone first; see Robin Hanson’s grabby aliens model. A second is that a very stable singleton may forbid expansion. A third (though more exotic) possibility is that the observable universe beyond a certain distance from Earth does not even exist; this has been discussed in relation to the simulation argument.

  4. ^

    The lock-in of this state would require permanent and strong replication limits. It seems to require a locked-in singleton.

  5. ^

    As far as I know.

  6. ^

    A related idea is the dark forest solution to the Fermi paradox.

  7. ^

    This was suggested by Agustín Covarrubias during his review of an earlier draft of this post.
