Bostrom (2003a) argues that a technologically mature civilization capable of large-scale space colonization “would likely also have the ability to establish at least the minimally favorable conditions required for future lives to be worth living”, and that it could thus be assumed that all of these lives would be worth living. Moreover, we can reasonably assume that outcomes optimized for everything that is valuable are more likely than outcomes optimized for things that are disvaluable: while people want the future to be valuable for both altruistic and self-oriented reasons, no one intrinsically wants things to go badly.
However, Bostrom himself has later argued that technological advancement combined with evolutionary forces could “lead to the gradual elimination of all forms of being worth caring about” (Bostrom 2004), admitting the possibility of technologically advanced civilizations with very little of anything that we would consider valuable. The technological potential to create a civilization of positive value does not automatically translate into that potential being used, so a very advanced civilization could still be one of no value or even negative value.
Examples of technology’s potential being unevenly applied can be found throughout history. Wealth remains unevenly distributed today, with an estimated 795 million people suffering from hunger even as one third of all food produced goes to waste (World Food Programme, 2017). Technological advancement has helped prevent many sources of suffering, but it has also created new ones, such as factory-farming practices under which large numbers of animals are maltreated in ways that maximize their production: in 2012, the number of animals slaughtered for food worldwide was estimated at 68 billion (Food and Agriculture Organization of the United Nations 2012). Industrialization has also contributed to anthropogenic climate change, which may cause considerable global destruction. Earlier in history, advances in seafaring enabled the transatlantic slave trade, with close to 12 million Africans shipped across the Atlantic into slavery (Manning 1992).
Technological advancement does not automatically lead to positive results (Häggström 2016). Persson & Savulescu (2012) argue that human tendencies such as “the bias towards the near future, our numbness to the suffering of great numbers, and our weak sense of responsibility for our omissions and collective contributions”, which are a result of the environment humanity evolved in, are no longer sufficient for dealing with novel technological problems such as climate change and the growing ease with which small groups can cause widespread destruction. Supporting this case, Greene (2013) draws on research in moral psychology to argue that morality has evolved to enable mutual cooperation and collaboration within a select group (“us”), and to enable groups to fight off everyone else (“them”). Such an evolved morality is badly equipped to deal with collective action problems that require global compromises, and it also increases the risk of conflict and generally negative-sum dynamics as more groups come into contact with each other.
As an opposing perspective, West (2017) argues that while people are often willing to engage in cruelty if this is the easiest way of achieving their desires, they are generally “not evil, just lazy”. Practices such as factory farming are widespread not because of some deep-seated desire to cause suffering, but rather because they are the most efficient way of producing meat and other animal source foods. If technologies such as growing meat from cell cultures became more efficient than factory farming, then the desire for efficiency could lead to the elimination of suffering. Similarly, industrialization has reduced the demand for slaves and forced labor as machine labor has become more effective. At the same time, West acknowledges that this is not a knockdown argument against the possibility of massive future suffering, and that the desire for efficiency could still lead to suffering outcomes such as simulated game worlds filled with sentient non-player characters (see section on cruelty-enabling technologies below). [...]
4.2 Suffering outcome: dystopian scenarios created by non-value-aligned incentives.
Bostrom (2004, 2014) discusses the possibility of technological development and evolutionary and competitive pressures leading to various scenarios where everything of value has been lost, and where the overall value of the world may even be negative. Considering the possibility of a world where most minds are brain uploads doing constant work, Bostrom (2014) points out that we cannot know for sure that happy minds are the most productive under all conditions: it could turn out that anxious or unhappy minds would be more productive. [...]
More generally, Alexander (2014) discusses examples such as tragedies of the commons, Malthusian traps, arms races, and races to the bottom as cases where people are forced to choose between sacrificing some of their values and getting outcompeted. Alexander also notes the existence of changes to the world that nearly everyone would agree to be net improvements—such as every country reducing its military by 50%, with the savings going to infrastructure—which nonetheless do not happen because nobody has the incentive to carry them out. As such, even if the prevention of various kinds of suffering outcomes would be in everyone’s interest, the world might nonetheless end up in them if the incentives are sufficiently badly aligned and new technologies enable their creation.
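To make the incentive structure concrete, the military-reduction example can be modeled as a simple prisoner’s dilemma. The sketch below uses hypothetical payoff numbers of our own choosing (not taken from Alexander 2014); the point is only that keeping one’s full military is each side’s best response regardless of what the other side does, even though mutual reduction would leave both better off:

```python
# Toy payoff matrix for the mutual-disarmament example. The numbers are
# hypothetical and chosen only to reproduce the incentive structure:
# mutual reduction beats mutual armament, but each side does better by
# keeping its military no matter what the other side does.
PAYOFFS = {
    ("reduce", "reduce"): (3, 3),  # both cut militaries by 50%, savings go to infrastructure
    ("reduce", "keep"):   (0, 4),  # the unilateral reducer is outcompeted
    ("keep",   "reduce"): (4, 0),
    ("keep",   "keep"):   (1, 1),  # wasteful arms race, but neither side is exploited
}
STRATEGIES = ("reduce", "keep")

def best_response(opponent_move: str) -> str:
    """Return the row player's payoff-maximizing strategy against a fixed opponent move."""
    return max(STRATEGIES, key=lambda s: PAYOFFS[(s, opponent_move)][0])

for opponent_move in STRATEGIES:
    print(f"Against '{opponent_move}', the best response is '{best_response(opponent_move)}'")
# 'keep' wins in both cases, so (keep, keep) is the stable outcome,
# even though (reduce, reduce) would give both sides a higher payoff.
```

No individual actor can improve its position by unilaterally switching to the cooperative strategy, which is exactly why the mutually preferred change fails to happen without some coordination mechanism.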
An additional reason for why such dynamics might lead to various suffering outcomes is the so-called Anna Karenina principle (Diamond 1997, Zaneveld et al. 2017), named after the opening line of Tolstoy’s novel Anna Karenina: “all happy families are alike; each unhappy family is unhappy in its own way”. The general form of the principle is that for a range of endeavors or processes, from animal domestication (Diamond 1997) to the stability of animal microbiomes (Zaneveld et al. 2017), there are many different factors that all need to go right, with even a single mismatch being liable to cause failure.
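As an illustrative calculation (ours, not from the cited sources): if a good outcome requires n independent factors to each go right with probability p, the chance that all of them do is p^n. Even with ten factors that each go right 90% of the time, the probability that everything goes right is only 0.9^10 ≈ 0.35, and it shrinks rapidly as the number of required factors grows.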
Within the domain of psychology, Baumeister et al. (2001) review a range of research areas to argue that “bad is stronger than good”: while sufficiently many good events can overcome the effects of bad experiences, bad experiences have a bigger effect on the mind than good ones do. The effect of positive changes to well-being also tends to decline faster than the impact of negative changes: on average, people’s well-being suffers and never fully recovers from events such as disability, widowhood, and divorce, whereas the improved well-being that results from events such as marriage or a job change dissipates almost completely given enough time (Lyubomirsky 2010).
To recap: various evolutionary and game-theoretic forces may push civilization in directions that are effectively random, random changes are likely to be bad for the things that humans value, and the effects of bad events are likely to linger disproportionately in the human psyche. Together, these considerations suggest (though do not guarantee) that freewheeling development could eventually come to produce massive amounts of suffering.
Michael’s definition of risks of disappointing futures doesn’t include s-risks though, right?
a disappointing future is when humans do not go extinct and civilization does not collapse or fall into a dystopia, but civilization[1] nonetheless never realizes its potential.
I guess we get something like “risks of negative (or nearly negative) futures” if we add up the two types.
Depends on exactly which definition of s-risks you’re using; one of the milder definitions is just “a future in which a lot of suffering exists”, such as humanity settling most of the galaxy with each of those worlds having about as much suffering as the Earth has today. That is arguably not a dystopian outcome, nor necessarily terrible in terms of how much suffering there is relative to happiness, but it is still an outcome in which there is an astronomically large absolute amount of suffering.
We also discussed some possible reasons for why there might be a disappointing future in the sense of having a lot of suffering, in sections 4-5 of Superintelligence as a Cause or Cure for Risks of Astronomical Suffering; a few excerpts are quoted above.
Good point, and it is consistent with CLR’s s-risks definition. :)