Maximizing long-term impact

Outline: I argue that interventions which affect the relative probabilities of humanity’s long-term scenarios have much higher impact than all other interventions. I discuss some possible long-term scenarios and give a high-level classification of interventions.

Background

It is common knowledge in the Effective Altruism movement that different interventions often have vastly different marginal utility (per dollar or per some other unit of invested effort). Therefore, one of the most important challenges in maximizing impact is identifying interventions with marginal utility as high as possible. In the current post, I attack this challenge in the broadest possible scope: taking into account impact along the entire timeline of the future.

One of the first questions that arises in this problem is the relative importance of short-term versus long-term impact. A detailed analysis of this question is outside the scope of the current post. I have argued elsewhere that updateless decision theory and Tegmark’s mathematical universe hypothesis imply a time discount falling much more slowly than exponentially and only slightly faster than [time since Big Bang]⁻¹. This means that the timescale on which the time discount becomes significant (at least about 14 billion years from today’s standpoint) is much longer than the age of the human species, favoring interventions focused on the far long term.
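
To make the shape of this discount concrete, below is a minimal numerical sketch (my own illustration, not part of the original argument). It contrasts a conventional exponential discount with a discount proportional to the inverse of the time since the Big Bang; the ~14-billion-year figure comes from the text, while the 100-year half-life of the exponential is an arbitrary assumption chosen purely for contrast.

```python
# Toy comparison of discount functions (illustrative only).
# The ~1.4e10-year age of the universe and the 1/(time since Big Bang)
# form come from the text; the 100-year half-life is an arbitrary
# assumption used purely for contrast.

AGE_OF_UNIVERSE_YEARS = 1.4e10  # approximate time since the Big Bang

def exponential_discount(t_years, half_life=100.0):
    """Conventional exponential discounting: value halves every `half_life` years."""
    return 0.5 ** (t_years / half_life)

def inverse_time_discount(t_years):
    """Discount proportional to 1/(time since Big Bang), normalized to 1 today."""
    return AGE_OF_UNIVERSE_YEARS / (AGE_OF_UNIVERSE_YEARS + t_years)

for t in (1e2, 1e4, 1e6, 1e9, 1.4e10):
    print(f"t = {t:.0e} yr: exponential = {exponential_discount(t):.2e}, "
          f"inverse-time = {inverse_time_discount(t):.2f}")
# The inverse-time discount only falls to 0.5 after ~14 billion years,
# which is the sense in which far-future impact is barely discounted.
```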

The most evident factor affecting humanity’s welfare in the long-term is scientific and technological progress. Progress has drastically transformed human society, increased life expectancy, life quality and total human population. The industrial revolution in particular has created a world in which the majority of people in developed countries enjoy a lifestyle of incredible bounty and luxury compared to the centuries which came before. Progress continues to advance in enormous steps, with total eradication of disease and death and full automation of labor required for a comfortable lifestyle being realistic prospects for the coming centuries.

It might appear reasonable to conclude that the focus of long-term interventions has to be advancing progress as fast as possible. Such a conclusion would be warranted if progress were entirely one-dimensional, or at least possessed only one possible asymptotic trajectory in the far future. However, this is almost certainly not the case. Instead, there are a number of conceivable asymptotic trajectories (henceforth called “future scenarios”) with vastly different utility. Hence, interventions aiming to speed up progress appear much less valuable than interventions aiming to modify the relative probabilities of different scenarios. For example, it is very difficult to imagine even a lifetime of effort by the most suitably skilled person speeding up progress by more than 100 years. On the other hand, it is conceivable that a comparable effort could change scenario probabilities by 1%. The value of the former intervention can be roughly quantified as 10² late-humanity-years, whereas the value of the latter intervention is at least of the order of magnitude of 14 billion × 1% = 1.4 × 10⁸ late-humanity-years.
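
As a back-of-the-envelope check on the arithmetic above, here is a short sketch (my own restatement, reusing only the figures quoted in the text: a 100-year speed-up, a 1% probability shift and a ~14-billion-year horizon).

```python
# Back-of-the-envelope comparison using the figures quoted in the text.
HORIZON_YEARS = 14e9        # timescale over which the time discount stays significant
speedup_years = 100         # generous estimate of a lifetime's effect on the pace of progress
probability_shift = 0.01    # conceivable lifetime effect on scenario probabilities

value_of_speedup = speedup_years                    # ~1e2 late-humanity-years
value_of_shift = probability_shift * HORIZON_YEARS  # ~1.4e8 late-humanity-years

print(f"speeding up progress: ~{value_of_speedup:.0e} late-humanity-years")
print(f"shifting scenarios:   ~{value_of_shift:.1e} late-humanity-years")
print(f"ratio:                ~{value_of_shift / value_of_speedup:.0e}")
```

On these figures the scenario-shifting intervention comes out ahead by a factor of roughly a million, which is the point of the comparison.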

Future Scenarios

A precise description of scenario space is probably impossible at the current level of knowledge. Indeed, improving our understanding of this space is one type of intervention I will discuss below. In this post I don’t even pretend to give a full classification of the scenarios that are possible as far as we can know today. Instead, I only list the examples that currently seem to me the most important, in order to give some idea of what scenario space might look like.

Some of the scenarios I discuss cannot coexist as real physical possibilities since they rely on mutually contradictory assumptions about the feasibility of artificially creating and/or manipulating intelligence. Nevertheless, as I see it, they all remain valid possibilities given our current state of knowledge (other people are more confident than I am regarding the aforementioned assumptions). Also, there seems to be no set of reasonable assumptions under which only one scenario is physically possible.

I call “dystopian” those scenarios in which I’m not sure I would want to wake up from cryonic suspension, and “utopian” the other scenarios (the future is likely to be so different from the present that it will appear either horrible or amazing in comparison). This distinction is not of fundamental importance: instead, our decisions should be guided by the relative value of different scenarios. Also, some scenarios contain residual free parameters (scenario space moduli, so to speak) which affect their relative value with respect to other scenarios.

Dystopian Scenarios

Total Extinction

No intelligent entities remain which are descended from humanity in any sense. Possible causes include global thermonuclear war, a bioengineered pandemic, uncontrollable nanotechnology and natural disasters such as asteroid impact. The last cause, however, seems unlikely, since the frequency of such events is low and defenses will probably be ready long before they are needed.

Unfriendly Artificial Intelligence

According to a hypothesis known as “AI foom”, self-improving artificial intelligence will reach a critical point in its development (somewhere below human intelligence) at which its intelligence growth will become so rapid that it quickly crosses into superintelligence and becomes smarter than all other coexisting intelligent entities put together. The fate of the future will thus hinge on the goal system programmed into this “singleton”. If the goal system was not designed with safety in mind (a highly non-trivial challenge known as friendly AI), the resulting AI is likely to wipe out the human race. The AI itself is likely to proceed with colonizing the universe, creating a future possibly more valuable than inanimate nature but still highly dystopian [1].

Superdictatorship

A single person or a small group of people gains absolute power over the rest of humanity and proceeds to abuse this power. This may come about in a number of ways, for example:

  • Dictators enhance their own intelligence. The risks inherent in this process may produce extremely immoral posthumans even if the initial persons were moral.

  • Creation of superintelligences that are completely loyal to the dictators. These superintelligences can be AI or enhanced humans. This scenario requires the malevolent group to solve the “friendliness” problem (maintaining a stable goal system through a process of extreme intelligence growth).

  • Use of nanotechnology to forcibly upload the rest of humanity into a computer simulation where they are at the mercy of the dictators.

  • Some sort of technology for complete mind control, e.g. involving genetically reprogramming humanity using a retrovirus.

The risk of these scenarios is elevated by the concentration of resources and technological capacity in the hands of authoritarian governments.

Unhumanity

A large number of people undergo a sequence of mind modifications that make them more intelligent and economically competitive but cause them to lose important human qualities (e.g. love, compassion, curiosity, humor). The gradual nature of the process gives it an unalarming appearance, since the participants consider only the next step at any given moment rather than the ultimate result. The resulting posthumans use their superior intelligence and economic power to wipe out the remaining unmodified or weakly modified people. The value of this scenario can be as low as that of the UFAI scenario or somewhat higher, depending on the specifics of the mind modifications.

Utopian Scenarios

Friendly Artificial Intelligence

The AI foom singleton is imbued with a goal system very close to human values, possibly along the lines of Coherent Extrapolated Volition or the values of the specific person or group of persons from whose point of view we examine the desirability of scenarios. This is probably the most utopian scenario since it involves an immensely powerful superintelligence working towards creating the best universe possible. It is difficult to know the details of the resulting future (although there have been some speculations) but it is guaranteed to be highly valuable.

Emulation Republic

All people exist as whole brain emulations or modified versions thereof. Each star system has a single government based on some form of popular sovereignty.

Non-consensual physical violence doesn’t exist, since it is impossible to invade someone’s virtual space without her permission and shared virtual spaces follow guaranteed rules of interaction. A fully automated infrastructure in which everyone is a shareholder allows people to live comfortably without the need to work. Disease is irrelevant, immortality is a given (in the sense of extremely long life; the heat death of the universe might still pose a problem). People choose their own pseudo-physical form in virtual spaces, so physical appearance is not a factor in social rank, gender assignment at birth causes few problems, and racism in the modern sense is a non-issue.

People are free to create whatever virtual spaces they want within the (weak) resource constraints, as long as those spaces don’t contain morally significant entities. Brain emulations without full citizen status are forbidden, with allowances for raising children. Cloning oneself is allowed, but making children is subject to regulation for the child’s benefit.

Access to the physical layer of reality is strictly regulated. It is allowed only for pragmatic reasons such as scientific research aimed at extending the civilization’s lifespan even further. All requests for access are reviewed by many people, only the minimal necessary access is approved, and the process is monitored in real time. By these means, the threat of malevolent groups breaking the system through the physical layer is neutralized.

Superrational Posthumanity

Human intelligence is modified to be much more superrational. This effectively solves all coordination problems, removing the need for government as we understand it today. This scenario assumes that strong modification of human intelligence is feasible, which is a weaker assumption than the ability to create de novo AI but a stronger one than the ability to create whole brain emulations.

Other Scenarios

There are scenarios which are difficult to classify as either “dystopian” or “utopian”, due to the strong effect of certain parameters and the different imaginable “cryonic wake-up” situations. Such scenarios can be constructed by mixing dystopian and utopian scenarios. This includes scenarios with several classes of people (e.g. free citizens and slaves, the latter existing as emulations for the entertainment of their masters) and scenarios with several “species” of people (people with differently modified minds).

Intervention Types

I distinguish between four types of interventions with long-term impact. The types of intervention available in practice depend on where you are located on the progress timeline, with type I interventions available virtually always and type IV interventions available only close to a progress branching point. I give some examples of existing programmes within these categories, but the list is far from exhaustive. In fact, I would be glad if readers suggested more examples and discussed their relative marginal utility.

Type I: Sociocultural Intervention

These are interventions that aim to “raise the sanity waterline” (improve the average rationality of mankind, with higher weight on more influential people) and/or improve the morality of human cultures. The latter is to be regarded from the point of view of the person or group doing the intervention. These interventions don’t assume a specific model of long-term scenarios, instead striving to maximize the chance that humanity chooses the right path when it reaches the crossroads.

Examples of type I interventions include CFAR and the EA movement. Other examples might include educational programmes, atheist movements and human rights movements.

Type II: Futures Research

These are interventions that aim to improve our understanding of the possible future scenarios, their relative value and the factors influencing their relative probability. They assume the current state of progress is sufficiently advanced to make discussion of future scenarios relevant. For example, in 1915 nobody would have been able to envision whole brain emulation, AI or nanotechnology.

Examples include FHI, CSER, FLI and GCRI.

Type III: Selective Progress

These are interventions that try to accelerate progress in selected areas with the aim of increasing the probability of desirable scenarios. They assume our current understanding of future scenarios is sufficiently advanced to tell how their relative probabilities depend on progress in different areas.

One example is MIRI, which tries to accelerate progress in AGI safety relative to progress in AGI in general. Other possible examples would be research programmes studying defenses against bioengineered pandemics or nanoreplicators.

Type IV: Scenario Execution

These are interventions that aim at the direct realization of a specific scenario. They assume the relevant technology already exists.

As far as I know, such interventions are still impossible today. Theoretical examples include an FAI construction project or a defense system against bioengineered pandemics.

Summary

Long-term thinking leads to somewhat counterintuitive conclusions regarding the most effective interventions. Interventions aiming to promote scientific and technological progress are not necessarily beneficial and can even be harmful. Effective interventions are focused on changing culture, improving our understanding of the future and accelerating progress in highly selected areas.

Many questions remain, for example:

  • What is the importance of cultural interventions in first world versus third world countries?

  • In which areas is progress beneficial or harmful, to the extent of our current ability to predict?

  • What are the relative marginal utilities of existing programmes in the four categories above?

[1] There is a possibility that the UFAI will bargain acausally with an FAI in a different Everett branch, resulting in a utopia. However, there is still an enormous incentive to increase the probability of the FAI scenario with respect to the UFAI scenario.