Present-day good intentions aren’t sufficient to make the longterm future good in expectation

Intro

In chapter 9 of What We Owe the Future, Will MacAskill argues that the worst possible futures are worse than the best possible futures are good, but that the best futures are more likely.

This is a critical point for many forms of longtermism, because if the expected value (EV) of the future is negative, then on a number of moral views the EV of reducing existential risk (x-risk) is negative as well.
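To make the arithmetic behind this point concrete, here is a toy calculation (with numbers I invented purely for illustration): reducing x-risk shifts probability mass toward the long future, so the same intervention that looks valuable if that future is good looks harmful if it is bad.

```python
# Toy illustration with made-up numbers, not a model anyone endorses:
# reducing extinction risk moves probability mass from "no long future"
# (valued here at 0) to "long future" (valued at ev_future). If ev_future
# is negative, the very same intervention lowers overall expected value.
def ev_of_world(p_extinction, ev_future, ev_extinction=0.0):
    return p_extinction * ev_extinction + (1 - p_extinction) * ev_future

def ev_of_risk_reduction(p_before, p_after, ev_future):
    return ev_of_world(p_after, ev_future) - ev_of_world(p_before, ev_future)

print(ev_of_risk_reduction(0.20, 0.10, ev_future=100))   # +10.0: reduction helps
print(ev_of_risk_reduction(0.20, 0.10, ev_future=-100))  # -10.0: reduction hurts
```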

In this post, I will push back on this argument about the relative likelihood of the best and worst futures and argue that we don’t have a reason to conclude that the best futures are more likely.

MacAskill identifies two schools of thought on this issue: the pessimists and the optimists. However, I want to make the case for a third camp: the agnostics. Predicting near-term outcomes is already difficult, and in many cases impossible. In general, prediction accuracy declines rapidly as the events being predicted become more distant in time. After thousands or millions of years of societal, scientific, and technological change, I find it plausible that our 21st-century predictions will become all but useless. Even physicists (who can make more accurate predictions than academics in most other disciplines) don’t attempt to make predictions about domains where there is no useful evidence, like the question of what the universe was like before the Big Bang.

MacAskill offers a theory for why we should expect the EV of the future to be positive: the asymmetry of intentions. In general, most people have pro-social goals: they want to improve their own lives and the lives of those close to them and their community. While some people (for example, violent psychopaths) have bad intentions, they are in the minority. Furthermore, while some evil in the world results from genuinely anti-social intentions, a significant portion of bad outcomes are byproducts of other goals. For example, few people have the goal of deliberately hurting the environment, and yet environmental degradation occurs as a side effect of many otherwise positive or neutral economic activities. If good outcomes are mostly valued for their own sake, whereas bad outcomes are mostly byproducts of optimizing for some other goal, this gives us reason for optimism. There might be little incentive for future people to optimize for the worst possible world (because few desire that outcome per se), whereas many people might wish to optimize for the best possible world. According to this argument, the resulting asymmetry makes the future good in expectation.

I think this argument underrates how much our future descendants might diverge from us. A smart forecaster in the 1600s likely would have predicted that the slow pace of technological change would persist indefinitely, as it had for the preceding 10,000 years. Of course, such a forecaster would have been wrong. Through little fault of their own, they would have been unable to foresee the dramatic discontinuity created by the new technology of fossil-fueled machines.

It’s impossible to say exactly which discontinuities, if any, lie in our future. However, one plausible candidate is what I will call mind-modification. In the following sections I will consider three scenarios germane to mind-modification. For each scenario, I will seek to shed some light on the question of whether such a world would be good in expectation.

Scenario 1: The hardware and software of the human mind changes substantially

As its name suggests, mind-modification is the process of substantially altering a person’s mind, including aspects like emotionality, reasoning ability, and moral intuition.

Very limited forms of mind-modification already exist today. Stimulants like caffeine and Adderall are consumed habitually by some people with the goal of improving energy and focus; smartphones and the internet seem to be eroding many people’s ability to focus for prolonged periods; and some undetermined aspects of modern society are driving the Flynn effect (the general increase in IQ test scores over time).

These may be the early hints of a much more important trend. If humanity engages in significant, targeted modification of our brains in a way that impacts features like personality, intelligence, and moral intuition, this could represent a sharp discontinuity where past human behavior ceases to be a good predictor of future behavior.

1a: Mind-modification will likely be possible given a long enough time-horizon

Unless biotechnology hits an insurmountable impasse, we should expect modifications of the human brain to become technologically feasible at some point—if not in the near-term, then in hundreds, thousands, or millions of years.

In principle, limited mind-modification is already possible. Neuroscientists have gotten significant mileage out of studying individuals who have had accidents, strokes, or lesions that destroyed particular regions of their brain. For example, people with damage to the dorsolateral prefrontal cortex (dlPFC) have a much stronger attraction to immediate gratification than people without such damage. On the other hand, people with damage in the ventromedial prefrontal cortex (vmPFC) struggle to make basic decisions, and the decisions they do make are much more utilitarian, e.g. choosing to sacrifice one family member to save five strangers in a hypothetical trolley problem.[1] Similarly, a common experimental technique in neuroscience is to electrically stimulate different brain regions to study any resulting differences in behavior. If prevailing research ethics didn’t forbid such experiments, it seems plausible that deliberately destroying parts of the human brain would also be a fruitful research direction.

This is all speculative in the extreme. However, in this respect my argument does not differ substantially from common longtermist stories about the future. For example, Nick Bostrom’s original Astronomical Waste essay imagined “planet-sized computational structure(s)” which created simulated minds with “experiences indistinguishable from the typical current human [experience].”[2] This is not to say that I’m against cautious theorizing about the longterm future. The accomplished science fiction writer Kim Stanley Robinson likes to point out that “science fiction is the realism of our time.”[3] This makes sense to me. After all, if the ceiling for humanity’s scientific and technological progress is much higher than what we’ve currently achieved, we should expect the world to become steadily more fantastical the further out we go in time.

1b: If mind-modification is possible, then it will likely occur

If mind-modification becomes easy, individuals and groups will face powerful incentives to adjust themselves. The pressure could start from within. People typically perceive at least some defects in their own mind. We all complain at times about insufficient self-control, empathy, and drive, as well as about our fallible memory, limited fluid intelligence, and lapses in perceptiveness. And we don’t just complain about our own psychology; we complain about other people’s too. Who among us hasn’t wished our neighbors were more (or less) communal, open to outsiders, respectful of traditional values, or some combination thereof? If invasive forms of cognitive engineering become available to the typical person, parents will likely face pressure to have children with socially desirable traits, just as parents today face pressure to give their children the best possible nutrition, schooling, and social environment.

For illiberal governments with a high degree of state capacity, like the Chinese government, mind-modification might be a tempting way to increase social conformity and reduce threats to the government and its goals. The Chinese government continues to run “re-education” camps in Xinjiang in an effort to forcibly indoctrinate members of the predominantly Muslim Uyghur population. Boarding schools aiming to “re-educate” Indigenous peoples away from their cultures existed in the U.S. and Canada until relatively recently.[4][5]

Governments have often sought to create ideological as well as cultural conformity; Communist China and the Soviet Union both invested significant resources in the attempt. The technological modification of people’s brains would be a logical next step for governments willing to take extreme measures in pursuit of such goals.

1c: Digital people would make extreme mind-modification even more likely

The possibility of digital people (i.e. humans implemented as software) is a popular idea in the longtermist community. It has been discussed by Nick Bostrom and Holden Karnofsky (who was inspired by Robin Hanson) as a plausible technological development.[6] Digital people may consume fewer resources and be subject to fewer constraints compared to flesh-and-blood people, which might make a digital society appealing.

The advent of digital people would likely make mind-modification more powerful and pervasive. Even in black-box AI systems, the humans designing the system make deliberate choices about its construction. With present-day AI systems, it’s possible and often desirable to modify the reward function, for example by adding a novelty-seeking term that helps the system overcome challenges it would otherwise get stuck on.[7]
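As a rough sketch of what such a modification can look like (this is my own illustrative example, not the Go-Explore algorithm cited in the footnote, and all names are invented), a designer can simply add a decaying novelty bonus on top of the environment’s reward:

```python
# Illustrative sketch of reward-function modification: the designer adds an
# intrinsic novelty bonus that decays as a state is revisited, making rarely
# seen states more attractive to the agent. All names are invented.
from collections import defaultdict

class NoveltyShapedReward:
    def __init__(self, bonus_scale=0.5):
        self.visit_counts = defaultdict(int)  # how often each state was seen
        self.bonus_scale = bonus_scale        # a designer-chosen "dial"

    def shaped_reward(self, state, extrinsic_reward):
        """Return the environment's reward plus a bonus that decays with visits."""
        self.visit_counts[state] += 1
        novelty_bonus = self.bonus_scale / (self.visit_counts[state] ** 0.5)
        return extrinsic_reward + novelty_bonus

# A state that keeps yielding zero extrinsic reward becomes steadily less
# interesting as its novelty bonus decays across repeated visits.
shaper = NoveltyShapedReward()
print([round(shaper.shaped_reward("room_A", 0.0), 3) for _ in range(3)])
```

The point is not the specific bonus formula but how directly the designer can reach in and change what the system is trying to do.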

Modifying the mind of a computer program built to resemble a human may turn out to be easier than modifying a black-box system of equal or greater capabilities. Take the endocrine system, for example. Chemicals like testosterone that originate from glands located outside the brain have a significant impact on people’s desires, emotions, and behaviors like competitiveness.[8] If a digital person were implemented with a separate digital brain and digital endocrine system, it should be relatively easy to dial the levels of different pseudo-chemicals up and down, dramatically increasing or decreasing key inputs to the brain like serotonin or testosterone. Drug development and production would also be much easier in such a scenario, and the effects we already see today from brain damage, drug consumption, or electrical stimulation of different brain regions could be taken to new extremes.

1d: The iterative nature of mind-modification will introduce massive uncertainty

The aspect of mind-modification that introduces the most uncertainty is its iterative nature.

To see why, let’s consider a specific scenario. Imagine that in the next century it becomes popular for individuals to modify themselves so they become less neurotic (i.e. less prone to experience emotions like fear, regret, or anger). In the short term, this may be a positive development.

However, let’s also imagine that the modification which induces low neuroticism has the side effect of making people more impulsive, because they are less inclined to fear negative consequences of their actions. A little more impulsiveness might not be a bad thing in the short term. From this point on, however, decisions about subsequent modifications are being made by people who are more impulsive than their ancestors were. These new, less-inhibited generations may engage in modifications that wouldn’t seem prudent to us, like turning the dial further on the low-neuroticism/high-impulsiveness scale. On the time-scale of a few decades, this might have limited impacts. However, over thousands or millions of years of continuous mental engineering, the end-state of constant modification would be extremely difficult to predict.
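A toy simulation (with drift rates I invented purely for illustration) shows how this compounding works: each generation chooses its next modification with a psychology already shifted by the previous one, so small per-generation changes eventually escape the original range entirely.

```python
# Toy model of iterative mind-modification with invented numbers: each
# generation's impulsiveness influences how large a modification it chooses,
# so the per-generation drift compounds rather than staying constant.
def simulate_drift(generations, drift_per_step=0.05):
    neuroticism, impulsiveness = 1.0, 1.0  # arbitrary starting levels
    for _ in range(generations):
        # More impulsive generations opt for slightly larger modifications
        # than their predecessors would have considered prudent.
        step = drift_per_step * impulsiveness
        neuroticism -= step
        impulsiveness += step
    return round(neuroticism, 2), round(impulsiveness, 2)

print(simulate_drift(10))    # modest change after a handful of generations
print(simulate_drift(500))   # values wildly outside the original range
```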

Impulsiveness isn’t the only dimension where iterative mind-modification could be impactful. In subsection 1b I made the case that governments may be tempted to induce higher conformity through mental engineering. Increasingly high degrees of conformity might lead to significant changes in the moral outlook of some societies. For example, societies engineered to have high conformity might display more ingroupishness or distrust of outsiders. In addition to making the future less predictable, this could have more direct negative consequences. In particular, the phenomenon of moral circle expansion might reverse itself in societies where people engineer themselves to be more ingroupish. As opinions shift in the direction of insularity with each successive generation, the trend may become difficult to reverse.

Other modifications may target our moral intuitions directly. In subsection 1a I mentioned how damage to a particular region of the brain can lead to a more dispassionate, utilitarian view of the world. There are probably other changes we could induce in the brain that would substantially change our moral intuitions. Consequently, future generations will face the temptation to reform the moral intuitions of the populace along various dimensions. People with strong feelings about the correctness of utilitarianism (or alternatively, traditional, community-oriented values) will likely feel some temptation to win converts by influencing the basic genetic or software programming of society. The consequences of this may also spiral out of control over time. Mind-modification that pushes people towards utilitarianism, for example, might have the side effect of making people less empathetic, which could have very negative consequences in the longterm.

Given the often insurmountable difficulties of making predictions about even the very near term, the chances are minuscule that we can say anything meaningful about the intentions of a society a million years in the future, especially one that faces completely unknown social and technological constraints.

If we further specify that future people will have psychologies that are foreign to us, which seems likely, then any 21st-century prediction about the total EV of the future is vanishingly unlikely to be accurate. Without the mental continuity of humanity over time, we cannot say much about our descendants. The tendency for humans to engage in pro-social behavior like cooperating with strangers, refraining from violating social norms, or attempting to better our communities is rooted in our particular evolutionary history and its accompanying genetic makeup. While relatively pro-social behavior has been documented throughout recorded history, all humans during that span were using essentially the same mental hardware, which seems unlikely to persist over longterm time-horizons.

Scenario 2: Total regulation

2a: The difficulty of regulating fundamental technologies

The sort of mind-modification I described above may not be inevitable. It could be the case that any technologies that allow for mind-modification become highly regulated. However, this seems unlikely. International regulation is relatively weak, and even in the areas where it enforces strong prohibitions, like chemical and nuclear weapons, its success has been real but limited. Furthermore, unlike a weapon of mass destruction, mind-modification is not a weapon per se. It poses only a very indirect threat to any given nation, which weakens the case for a ban. While some nations may attempt to regulate the practice, it seems unlikely that countries like China, the U.S., and Russia will coordinate to enforce a common regulatory regime.

The problem of regulation grows dramatically more difficult if a substantial portion of the population are digital people. After all, modifying software systems is generally easier than changing physical systems, and once a modification has been developed it may be easy to copy and distribute. Without complete control over the relevant source code and computational resources, preventing the modification of digital people would be extremely difficult.

A liberal democracy would probably not be consistent enough to maintain the stringent regulation required to prevent all forms of mind-modification (including slower, more gradual forms) from occurring in perpetuity. After all, the chances of even a single present-day democracy reaching total consensus on a given issue are low. The chances of all governments on Earth coordinating to create such a consensus and then sustaining it for thousands of years seem minuscule.

Some small groups of present-day humans (like hunter-gatherer societies or the Amish) have successfully opted out of a substantial amount of technological change. However, such groups’ share of the population is declining over time, and they have minimal influence over everyone else. In general, humans seem unwilling to opt out of useful new technologies. Even if some sort of unified global democracy regulated mind-modification for a limited period, it’s hard to imagine such regulation persisting indefinitely.

Of course, technology may eventually allow for forms of government that have substantially more power to regulate than modern day governments. A government supported by automated police forces and constant digital surveillance could achieve a level of control over its population that has never been seen before. If the society in question consisted of digital people, the level of control might further exceed what’s possible with governance over flesh-and-blood people.

Such a scenario might successfully prevent mind-modification, but for a longtermist it may have other problems. In my view, there’s no particular reason to think that such a world would have a positive EV.

2b: The downsides of highly concentrated power

First, let’s imagine that power in our hypothetical global government is never transferred, and some individual (or some group of like-minded individuals) has complete power over humanity in perpetuity. Technologies like digital people or life-extension biotech may allow the same autocrat to rule over humanity for a very long time. This longevity could be even more extreme in the case of digital people, who might be run at different relative speeds. If the digital autocrat were run slower and their subjects faster, a single autocrat could preside over virtually endless generations of subjects.

Whether we consider indefinite autocracy to be good or bad may depend on who is in power. History is littered with examples of negligent, egotistical, and sadistic autocrats. A world ruled forever by such an autocrat may quickly come to resemble a kind of hell on Earth.

Of course, if the autocrat is selected at random from the population, it’s unlikely they will be a sadistic psychopath. However, the selection is unlikely to be random, as narcissists seem attracted to powerful positions. There could also be psychological effects of having such absolute power for a long period of time. For an autocrat bolstered by sufficiently advanced technology, the power imbalance would be more extreme than anything experienced thus far in human history. In some sense, the autocrat might become a god relative to their subjects. The psychological impacts on the autocrat of such an imbalance might be very difficult to predict.

We can also imagine a scenario where the transfer of power occurs at some regular interval. In this case, the global government still has the absolute control necessary to clamp down on mind-modification, but power periodically changes hands.

It seems likely that a system based on the transfer of power will erode in the longterm. If autocrats have any power over the system itself, they may attempt to interrupt the transfer of power. Given a sufficiently large number of autocrats, eventually an autocrat who desires to do so will come along.
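A toy probability calculation (with a per-autocrat failure rate I made up) illustrates why: if each successive autocrat independently has even a small chance of attempting to subvert the succession, the chance that at least one eventually tries approaches certainty as the number of successions grows.

```python
# Toy calculation with an invented per-autocrat probability: the chance that
# at least one of n successive autocrats attempts to subvert the succession
# is 1 - (1 - p)^n, which approaches 1 as n grows.
def prob_at_least_one_attempt(p_per_autocrat, n_successions):
    return 1 - (1 - p_per_autocrat) ** n_successions

for n in (10, 100, 1000):
    print(n, round(prob_at_least_one_attempt(0.01, n), 3))
# 10 -> 0.096, 100 -> 0.634, 1000 -> 1.0
```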

It could be the case that safeguards are set up in advance to prevent the system from degrading and to ensure that power is actually transferred. Such safeguards would need to be extraordinarily robust. An autocrat with enough power to strictly regulate the behavior of every person on Earth might have insidious ways of influencing their successors or the overall system of succession. Present-day systems of government are subject to all kinds of changes over time, e.g. the erosion of norms, the abolition of term limits, and regime change. This susceptibility to change would represent a serious point of failure for any autocratic system with a mechanism to transfer power.

Even if the transfer of power isn’t disrupted, the periodic tenure of ill-intentioned autocrats might produce effects that successors are unable to repair. Autocrats who are so inclined could presumably relax the regulation of mind-modification for the duration of their tenure and open Pandora’s box. In this way, a single bad autocrat could leave the minds of humanity in a very different shape than they received them.

Of course, it’s possible that the system of governance is constructed so that its limits are completely rigid. A super-intelligent, super-powerful AI might be able to enforce such a system if its goals could be specified with sufficient precision in advance. This scenario might have positive EV. However, it would also represent a profound curtailment of human autonomy. If the AI exercised such complete control that it would be impossible for would-be autocrats to alter the system and seize power for themselves, then all of humanity would be disempowered forever. In such a scenario, all future generations would live under the dead hand of their ancestors.

Scenario 3: Stagnation

Often, people in EA imagine a future that diverges substantially from the present in terms of technology. I count myself in this camp. It seems plausible that we are only scratching the surface of what science and technology can achieve. Conditional on that assumption, thousands of years of increasing scientific progress in areas like computing, biotech, and physics seem poised to have profound impacts on human society, and on the minds and bodies of the typical human.

However, technological stagnation is a non-zero possibility. Few predicted the industrial revolution, which represented the end of thousands of years of stagnation. Prior to the industrial revolution, there was the agricultural revolution, which itself marked the end of a much longer period of relative stagnation. There may be insurmountable technological or scientific hurdles in our future that make the ceiling for humanity much lower than it currently seems, and those hurdles may be unforeseeable.

I believe this scenario is likely a positive one. Under stagnation, humanity will not attain what might otherwise be its maximum potential, but it will also not suffer under the most extreme and prolonged adverse conditions. If the humans of a million years from now are essentially similar to the humans of today, the uncertainty about the future shrinks substantially. By eliminating the peaks and valleys from our range of possible futures, we may guarantee a small positive rate of return. After all, anatomically modern humans have been around for hundreds of thousands of years, and it seems like the typical life has been worth living in most times and places.

Stagnation in the hard sciences might also give the social sciences the time needed to develop much more robust models of human behavior, opening up the possibility of predicting the future of society with a high degree of accuracy. This might bring benefits like reduced conflict.

It might even be the case that stagnation is more likely than any of the other scenarios I have laid out, since it has strong historical precedent. If you buy that scenarios based on dramatic technological advances have completely uncertain value, then the possibility of stagnation might be a tempting reason to think we can influence the value of the longterm future.

Closing thoughts

In this post, I proposed three scenarios, each of which has different implications for the longterm EV of humanity’s future.

In the first scenario, humans use technology to modify our basic mental processes such that, over time, there is significant divergence between the psychology of anatomically modern humans and that of our distant descendants. This would lead to a future with an uncertain EV, because our ability to model the behavior of our descendants is contingent on information about their psychology, incentives, and technological constraints that we do not possess. The longer humanity’s future is, and the more technological progress occurs, the less useful our predictions will become.

In the second scenario, anatomically modern humans persist indefinitely, because technologies like mind-modification are restricted by a powerful central authority. The EV of this future is also hard to predict, because the result will depend heavily on the particular system of governance and the underlying probability of the system being captured or controlled by malicious individuals. Other systems that provide a check against absolute power, like liberal democracy, will likely not be able to prevent mind-modification over thousands or millions of years.

Finally, the third scenario acknowledges that stagnation is a possibility. The preceding discussion of mind-modification, digital people, and technologically enabled autocracy is speculation, and the future may resemble the past more than we think.

Making a prediction about the state of humanity thousands or millions of years in the future from the armchair is a fraught exercise. To be honest, absent the existing discourse about longtermism I probably wouldn’t attempt it. However, I hope the argument I’ve laid out here will provide an alternative perspective to a common longtermist train of thought, which holds that the longterm future is positive in expectation and therefore we should take it into consideration when calculating the expected loss from low-probability, high-impact risks.

(As has been pointed out many times, engineered pandemics or AI take-overs are bad on most moral views, including non-longtermist ones. Even if you buy everything in this post, it may have limited practical implications depending on your underlying probabilities for global catastrophic risks.)

Postscript

Just before posting this, I was scanning other entries to make sure my submission was formatted correctly, and I noticed another post on a similar theme. After skimming the other post, I decided to go ahead and post my own version, because I think the two posts focus on somewhat different considerations. However, I’m glad to see that I’m not alone in thinking that the sign-uncertainty question is an underrated issue for longtermism, and I hope there will be many future discussions addressing similar themes.

  1. ^

    Behave: The Biology of Humans at Our Best and Worst by Robert Sapolsky (Pages 54-56)

  2. ^

    Astronomical Waste: The Opportunity Cost of Delayed Technological Development by Nick Bostrom
  3. ^
  4. ^
  5. ^
  6. ^
  7. ^

    The Go-Explore algorithm (https://arxiv.org/pdf/1901.10995.pdf), which I learned about via Brian Christian’s The Alignment Problem: Machine Learning and Human Values

  8. ^

    Behave: The Biology of Humans at Our Best and Worst by Robert Sapolsky (Pages 99-104)
