An existential risk is a risk of an existential catastrophe, i.e. one that threatens the destruction of humanity’s long-term potential (Bostrom 2012; Ord 2020a). Existential risks include natural risks, such as those posed by asteroids or supervolcanoes, as well as anthropogenic risks, such as mishaps resulting from synthetic biology or artificial intelligence.
A number of authors have argued that reducing existential risk is especially important because the long-run future of humanity matters a great deal (Beckstead 2013; Bostrom 2013; Greaves & MacAskill 2019; Ord 2020a). On this view, there is no intrinsic moral difference between the importance of a life today and of one in a hundred years, and far more people may live in the future than are alive now. These authors argue, therefore, that it is overwhelmingly important to preserve humanity’s long-term potential, even if the risks to humanity are small.
One objection to this argument is that people have a special responsibility to others currently alive that they do not have to people who have not yet been born (Roberts 2009). Another objection is that, although existential risks would in principle be important to manage, they are currently so unlikely and so poorly understood that work on existential risk reduction is less cost-effective than work on other promising areas.
Beckstead, Nick (2013) On the Overwhelming Importance of Shaping the Far Future, PhD thesis, Rutgers University.
Bostrom, Nick (2002) Existential risks: analyzing human extinction scenarios and related hazards, Journal of Evolution and Technology, vol. 9.
A paper surveying a wide range of non-extinction existential risks.
Bostrom, Nick (2012) Frequently asked questions, Existential Risk: Threats to Humanity’s Future (updated 2013).
This FAQ introduces readers to existential risk.
Bostrom, Nick (2013) Existential risk prevention as global priority, Global Policy, vol. 4, pp. 15–31.
An academic paper making the case for existential risk work.
Greaves, Hilary & William MacAskill (2019) The case for strong longtermism, GPI Working Paper No. 7-2019, Global Priorities Institute, University of Oxford.
Karnofsky, Holden (2014) The moral value of the far future, Open Philanthropy, July 3.
Matheny, Jason Gaverick (2007) Reducing the risk of human extinction, Risk Analysis, vol. 27, pp. 1335–1344.
A paper exploring the cost-effectiveness of extinction risk reduction.
Ord, Toby (2020a) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.
Ord, Toby (2020b) Existential risks to humanity in Pedro Conceição (ed.) The 2020 Human Development Report: The Next Frontier: Human Development and the Anthropocene, New York: United Nations Development Programme, pp. 106–111.
Roberts, M. A. (2009) The nonidentity problem, Stanford Encyclopedia of Philosophy, July 21 (updated December 1, 2020).
Tomasik, Brian (2019) Risks of astronomical future suffering, Center on Long-Term Risk, July 2.
An article exploring ways in which a future full of Earth-originating life might be bad.
Whittlestone, Jess (2017) The long-term future, Effective Altruism, November 16.
civilizational collapse | dystopia | estimation of existential risks | existential catastrophe | existential risk factor | existential security | global catastrophic risk | hinge of history | longtermism | moral perspectives on existential risk reduction | Toby Ord | rationality community | Russell–Einstein Manifesto | s-risks