The ‘far future’ is not just the far future
It’s a widely held belief in the existential risk reduction community that we are likely to see a great technological transformation in the next 50 years [1]. Such a transformation could bring flourishing, existential catastrophe, or other drastic change for humanity. The next 50 years will matter directly for most currently living people. Existential risk reduction and handling the technological transformation are therefore not just questions of the ‘far future’ or the ‘long term’; they are also ‘near-term’ concerns.
The far future, the long term, and astronomical waste
Often in EA, the importance of the ‘far future’ is used to motivate existential risk reduction and other long-term oriented work such as AI safety. ‘Long term’ itself is used even more commonly, and while it is more ambiguous, it often carries the same meaning as ‘far future’. Here are some examples: Influencing the Far Future, The Importance of the Far Future, Assumptions About the Far Future and Cause Priority, The Long Term Future, Longtermism.
The ‘importance of the far future’ argument builds on the postulate that there are many possible good lives in the future, many more than currently exist. This long-term future could stretch hundreds, thousands, or billions of years, or even further into the future. Nick Bostrom’s Astronomical Waste presents the argument compellingly:
Given these estimates, it follows that the potential for approximately 10^38 human lives is lost every century that colonization of our local supercluster is delayed; or equivalently, about 10^29 potential human lives per second.
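As a quick restatement of the arithmetic in that quote (my own back-of-the-envelope check, not part of Bostrom’s essay): a century is roughly 3 × 10^9 seconds, so the per-second figure follows directly from the per-century figure.

```latex
% Rough check: a century is about 100 * 365.25 * 24 * 3600 ~ 3.16e9 seconds.
\frac{10^{38}\ \text{potential lives per century}}{3.16 \times 10^{9}\ \text{seconds per century}}
  \approx 3 \times 10^{28}
  \approx 10^{29}\ \text{potential lives per second}
```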
The existential risk reduction position is not predicated on astronomical waste
However, while astronomical waste is a very important argument, strong claims of this type are not necessary to take the existential risk reduction position. The vast majority of work in existential risk reduction is based on the plausibility that technologically driven events of immense impact on humanity, a technological transformation, will occur within the next 50 years. Currently living people, along with our children and grandchildren, would be drastically affected by such a technological transformation [2].
Similarly, Bostrom, in his Astronomical Waste essay, argues that even on a ‘person-affecting utilitarian’ view, reducing existential risk is a priority:
Now, if these assumptions are made, what follows about how a person-affecting utilitarian should act? Clearly, avoiding existential calamities is important, not just because it would truncate the natural lifespan of six billion or so people, but also – and given the assumptions this is an even weightier consideration – because it would extinguish the chance that current people have of reaping the enormous benefits of eventual colonization.
The ‘far future’ is the ‘future’
The arguments about the value of future lives and the possibly astronomical value of humanity’s future are very important. But our work in existential risk reduction is meant to help currently existing people, near-future people, and far-future people alike. Distinguishing between these mostly doesn’t seem decision-relevant if a technological transformation is likely to happen within the next 50 years. And speaking of existential risk reduction and flourishing in terms of the ‘far future’ too often seems likely to make people focus unduly on the general difficulty of imagining how they could affect that far future.
I propose we avoid calling what we are doing ‘far future’ work (or similar terms), except in cases where we think it will affect almost exclusively events beyond the next 50 years. So what should we say instead? The fates of currently living people, near-term future people, and long-term future people are all questions of the ‘future’. Perhaps we should just call it ‘future’-oriented work.
[1] When does the existential risk reduction community think we may see a technological transformation? In the Walsh 2017 survey, the median estimate of AI experts was that there’s a 50% chance we will have human-level AI by 2061. My assessment is that people in the existential risk reduction community hold similar views to the AI experts; I’m not aware of any direct surveys of the community. People in AI safety appear to generally have shorter timelines than the AI experts polled. Paul Christiano: “human labor being obsolete… within 20 years is something within the ballpark of 35% … I think compared to the community of people who think about this a lot, I’m more somewhere in, I’m still on the middle of the distribution”.
[2] Who will be alive in 20 or 50 years? Likely you, and likely your children and grandchildren. The median age in the world is currently 29.6 years. World life expectancy at birth is 72.2 years, US life expectancy is 78 years, and Canada’s is 82 years. Even without further life span improvements, the average currently living person will still be alive 40 years from now. Improving medicine globally will push countries closer to Canada’s level in the next few decades. Standard medicine doesn’t seem likely to make a great difference beyond that. However, direct aging prevention or reversal interventions such as senolytics could cause a phase change in life expectancy by adding decades, and interventions of this form may reach the market in the next few decades.
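As a rough restatement of the arithmetic behind that claim (my own naive estimate, not part of the original footnote), subtracting the median age from world life expectancy at birth gives

```latex
72.2\ \text{years (life expectancy at birth)} - 29.6\ \text{years (median age)} \approx 43\ \text{years remaining}
```

and actual remaining life expectancy around age 30 is typically somewhat higher than this naive subtraction, since it conditions on having already survived childhood.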
All work is future-oriented (other than a few exceptions involving the manipulation of time). Maybe “long-term future” would be better than “far future”, since the latter suggests to me that the benefits won’t be observed until far into the future, whereas “long-term” doesn’t necessarily exclude the “short term”, although most of the impact is typically thought to come from benefits in the far future. Just referring to it as “extinction risk reduction” or “existential risk reduction” doesn’t necessarily have longtermist or far-future connotations.
I think the case for existential risk reduction under a symmetric person-affecting view (like presentism or necessitarianism) is much weaker than the case for, say, global health and poverty work, for which we have far more robustly cost-effective interventions.
Also, under an asymmetric person-affecting view, reducing extinction risk would probably not be a priority (or it could even be bad), but reducing s-risks, i.e. risks of astronomical suffering, another type of existential risk, could be. There is overlap between extinction risk work and s-risk work through cooperation/conflict work and AI safety; see the priorities of the Effective Altruism Foundation/Foundational Research Institute. I think asymmetric views would normally prioritize global health and poverty, animal welfare, or s-risks, depending on your priors and how much weight you give to empirical evidence.
Two analyses here indicate that the expected cost per life saved in the present generation from both AGI safety and alternative foods (for nuclear winter, abrupt climate change, etc.) is lower than for global health. There are orders of magnitude of uncertainty in the x-risk interventions, but still little overlap with the global health cost-effectiveness distributions, so I think the conclusion is fairly robust.
This previous post by Gregory Lewis also seems relevant, both to this point in particular and to this post in general. E.g., Lewis writes:
By robust, I mean relying less on subjective judgements (including priors). Could someone assign a much lower probability to such catastrophic risks? Could they be much more skeptical about how much extra work in the area reduces/mitigates these risks (i.e. the progress)?
On the other hand, how much more skeptical could they be of GiveWell-recommended charities, which are based on RCTs? Of course, generalization is always an issue.
Thank you for your thoughtful comment!
One alternative could be ‘full future’, signifying that it encompasses both the near and long term.
I think there should be space for new and more specific terms. ‘Long term’ has strengths, but it’s overloaded with many meanings. ‘Existential risk reduction’ is specific but quite a mouthful; something shorter would be great. I’m working on another article where I will offer one new alternative.
Isn’t just “x-risk” okay? Or is too much lost in the abbreviation? I suppose people might confuse it for extinction risks specifically, instead of existential risks generally, but you could write it out as “existential risks (x-risks)” or “x-risks (existential risks)” the first time in an article.
Also, “reduction” seems kind of implicit due to the negative connotations of the word “risk” (you could reframe as “existential opportunities” if you wanted to flip the connotation). No one working on global health and poverty wants to make people less healthy or poorer, and no one working on animal welfare wants to make animals suffer more.
Good point: ‘x-risk’ is short, and ‘reduction’ should be, or should quickly become, implicit. It will work well in many circumstances. For example, “I work with x-risk” works just as “I work with/in global poverty” does. Though some objections that occur to me in the moment are: “the cause of x-risk” feels clumsy, “letter, dash, and then a word” feels like an odd construct, and it’s a bit negatively oriented.
Some risks are pretty low for the next two generations, so they tend to be neglected in favor of more present concerns (welfare, inequality, financial risks, etc.).
I was wondering: perhaps one could model human cooperation across time as analogous to pension schemes (or Ponzi schemes, if they fail), i.e. as a network of successive games of partial conflict, so that foreseeing the end of the chain of cooperation would, by backward induction, entail its collapse. So, if we predict that an x-risk will increase, we can anticipate that the probability of future generations refusing to cooperate with previous ones will increase too: my great-grandchildren won’t take care of my grandchildren (or of whatever they value), who will then see no point in cooperating with my children, who in turn won’t cooperate with me. Does that make sense?
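To make the backward-induction intuition concrete, here is a minimal toy sketch (my own illustration; the `cooperation_chain` function, the generation count, and the all-or-nothing cooperation rule are assumptions, not anything from the post or this comment). Each generation cooperates with its predecessor only if it expects its own successor to cooperate with it; if the final generation is foreseen to defect, for instance because extinction is anticipated, cooperation unravels all the way back to the present.

```python
# Toy backward-induction model of a chain of intergenerational "cooperation games".
# Assumption: a generation cooperates with its predecessor iff it expects its own
# successor to cooperate with it. The last generation has no successor, so if the
# chain is foreseen to end there, it defects, and defection propagates backwards.

def cooperation_chain(num_generations: int, chain_is_finite: bool = True) -> list[bool]:
    """Return, for each generation, whether it cooperates with its predecessor."""
    cooperates = [False] * num_generations
    # The last generation cooperates only if the chain is expected to continue
    # (i.e. its end is not foreseen, e.g. due to an anticipated x-risk).
    cooperates[-1] = not chain_is_finite
    # Standard backward induction: work from the last generation to the first.
    for g in range(num_generations - 2, -1, -1):
        cooperates[g] = cooperates[g + 1]
    return cooperates

if __name__ == "__main__":
    # A foreseen end of the chain collapses cooperation for every earlier generation.
    print(cooperation_chain(5, chain_is_finite=True))   # [False, False, False, False, False]
    # If no end is foreseen, cooperation can be sustained all the way down the chain.
    print(cooperation_chain(5, chain_is_finite=False))  # [True, True, True, True, True]
```

In practice the unraveling would presumably be probabilistic and partial rather than all-or-nothing, which is one way the pension-scheme analogy could be softened.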