
Ethics of existential risk


The ethics of existential risk is the study of the ethical questions raised by existential risk, including how bad an existential catastrophe would be, how good it is to reduce existential risk, why these things are as bad or good as they are, and how the answers vary across specific existential risks. There is a range of perspectives on these questions, and the answers have implications for how much to prioritise reducing existential risk in general and which specific risks to prioritise.

In The Precipice, Toby Ord discusses five different “moral foundations” for assessing the value of existential risk reduction, depending on whether emphasis is placed on the future, the present, the past, civilizational virtues or cosmic significance.[1]

The future

In one of the earliest discussions of the topic, Derek Parfit offers the following thought experiment:[2]

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

  1. Peace.

  2. A nuclear war that kills 99% of the world’s existing population.

  3. A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

The scale of what is lost in an existential catastrophe is determined by humanity’s long-term potential—all the value that would be realized if our species survived indefinitely. The universe’s resources could sustain vast numbers of biological human beings, and vastly greater numbers of digital human minds.[3] And this may not exhaust all the relevant potential, if value supervenes on other things besides human or sentient minds, as some moral theories hold.

In the effective altruism community, this is probably the ethical perspective most associated with existential risk reduction: existential risks are often seen as a pressing problem because of the astronomical amounts of value or disvalue potentially at stake over the course of the long-term future.

The present

Some philosophers have defended views on which future or contingent people do not matter morally.[4] Even on such views, however, an existential catastrophe could be among the worst things imaginable: it would cut short the lives of every living moral patient, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, a case for reducing existential risk can be grounded in concern for presently existing beings.

This present-focused moral foundation is sometimes framed as a “near-termist” or “person-affecting” argument for existential risk reduction.[5] In the effective altruism community, it appears to be the most commonly discussed non-longtermist ethical argument for existential risk reduction.

The past

Humanity can be considered as a vast intergenerational partnership, engaged in the task of gradually increasing its stock of art, culture, wealth, science and technology. In Edmund Burke’s words, “As the ends of such a partnership cannot be obtained except in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born.”[6] On this view, a generation that allowed an existential catastrophe to occur may be regarded as failing to discharge a moral duty owed to all previous generations.[7]

Civilizational virtues

Instead of focusing on the impacts of individual human action, one can consider the dispositions and character traits displayed by humanity as a whole, which Ord calls civilizational virtues.[8] An ethical framework that attached intrinsic moral significance to the cultivation and exercise of virtue would regard the neglect of existential risks as showing “a staggering deficiency of patience, prudence, and wisdom.”[9]

Cosmic significance

At the beginning of On What Matters, Parfit writes that “We are the animals that can both understand and respond to reasons. [...] We may be the only rational beings in the Universe.”[10] If this is so, then, as Ord writes, “responsibility for the history of the universe is entirely on us: this is the only chance ever to shape the universe toward what is right, what is just, what is best for all.”[11] In addition, it may be the only chance for the universe to understand itself.

Evaluating and prioritizing existential risk reduction

It is important to distinguish between the question of whether a given ethical perspective would regard existential risk reduction as net positive and the question of whether that perspective would prioritise it, and this distinction is not always made.[12] One reason the distinction matters is that existential risk reduction may be considerably less tractable, and perhaps less neglected, than some other cause areas (e.g., near-term farmed animal welfare), with this being outweighed only by its far greater importance from a longtermist perspective. On an ethical perspective that regards existential risk reduction as merely comparable in importance to other major global issues, it may therefore no longer seem worth prioritising.
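To make the reasoning concrete, here is a minimal sketch in Python of the kind of importance, tractability, and neglectedness comparison described above. The numbers are purely hypothetical illustrations, not estimates: the point is only that a cause can score worse on tractability and neglectedness yet still dominate if its importance is judged to be orders of magnitude greater, and lose that edge once its importance is assumed to be merely comparable.

```python
# Toy illustration only: all values below are hypothetical, not estimates.
# Cost-effectiveness is modelled, ITN-style, as importance x tractability x neglectedness.

def cost_effectiveness(importance, tractability, neglectedness):
    """Crude multiplicative proxy for how promising marginal work on a cause is."""
    return importance * tractability * neglectedness

causes = {
    # (importance, tractability, neglectedness), relative to an arbitrary baseline cause
    "baseline cause (e.g. near-term animal welfare)": (1.0, 1.0, 1.0),
    "x-risk, longtermist importance": (1000.0, 0.1, 0.5),   # far more important, less tractable
    "x-risk, merely comparable importance": (1.0, 0.1, 0.5),
}

for name, (i, t, n) in causes.items():
    print(f"{name}: {cost_effectiveness(i, t, n):.2f}")

# With longtermist importance, x-risk work dominates the baseline (50.00 vs 1.00);
# with merely comparable importance, it falls below it (0.05 vs 1.00).
```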

Further reading

Aird, Michael (2021) Why I think The Precipice might understate the significance of population ethics, Effective Altruism Forum, January 5.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 2.

Related entries

astronomical waste | existential risk | longtermism | moral philosophy | moral uncertainty | person-affecting views | population ethics | prioritarianism | s-risk | suffering-focused ethics

  1. ^

    Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

  2. ^

    Parfit, Derek (1984) Reasons and Persons, Oxford: Clarendon Press, pp. 453–454.

  3. ^

    Bostrom, Nick, Allan Dafoe & Carrick Flynn (2020) Public policy and superintelligent AI, in S. Matthew Liao (ed.), Ethics of Artificial Intelligence, Oxford: Oxford University Press, p. 319.

  4. ^

    Narveson, Jan (1973) Moral problems of population, Monist, vol. 57, pp. 62–86.

  5. ^

    Lewis, Gregory (2018) The person-affecting value of existential risk reduction, Effective Altruism Forum, April 13.

  6. ^

    Burke, Edmund (1790) Reflections on the Revolution in France, London: J. Dodsley, p. 193.

  7. ^

    Ord (2020) The Precipice, pp. 49–53.

  8. ^

    Ord (2020) The Precipice, p. 53.

  9. ^

    Grimes, Barry (2020) Toby Ord: Fireside chat and Q&A, Effective Altruism Global, March 21.

  10. ^

    Parfit, Derek (2011) On What Matters, vol. 1, Oxford: Oxford University Press, p. 31.

  11. ^

    Ord (2020) The Precipice, pp. 53 and 55.

  12. ^

    See Daniel, Max (2020) Comment on ‘What are the leading critiques of longtermism and related concepts’, Effective Altruism Forum, June 4.

Critique of MacAskill’s “Is It Good to Make Happy People?”, by Magnus Vinding (Aug 23, 2022)

The person-affecting value of existential risk reduction, by Gregory Lewis (Apr 13, 2018)

A longtermist critique of “The expected value of extinction risk reduction is positive”, by Anthony DiGiovanni (Jul 1, 2021)

Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill), by MichaelA (May 2, 2021)

Summary: Mistakes in the Moral Mathematics of Existential Risk (David Thorstad), by Noah Varley (Apr 10, 2024)

Prioritizing x-risks may require caring about future people, by elifland (Aug 14, 2022)

Carl Shulman on the common-sense case for existential risk work and its practical implications, by 80000_Hours (Oct 8, 2021)

Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?, by Denkenberger (Nov 19, 2018)

AGI safety and losing electricity/industry resilience cost-effectiveness, by Ross_Tieman (Nov 17, 2019)

“Long-Termism” vs. “Existential Risk”, by Scott Alexander (Apr 6, 2022)

Toby Ord: Fireside Chat and Q&A, by EA Global (Jul 21, 2020)

Help me find the crux between EA/XR and Progress Studies, by jasoncrawford (Jun 2, 2021)

Existential risk as common cause, by technicalities (Dec 5, 2018)

Should we be spending no less on alternate foods than AI now?, by Denkenberger (Oct 29, 2017)

8 possible high-level goals for work on nuclear risk, by MichaelA (Mar 29, 2022)

Peacefulness, nonviolence, and experientialist minimalism, by Teo Ajantaival (May 23, 2022)

How you can save expected lives for $0.20-$400 each and reduce X risk, by Denkenberger (Nov 27, 2017)

Toby Ord at EA Global: Reconnect, by EA Global (Mar 20, 2021)

Toby Ord: Fireside chat (2018), by EA Global (Mar 1, 2019)

Thoughts on “A case against strong longtermism” (Masrani), by MichaelA (May 3, 2021)

A new place to discuss cognitive science, ethics and human alignment, by Daniel_Friedrich (Nov 4, 2022)

The Tyranny of Existential Risk, by Karl Faulks (Nov 18, 2024)

Linkpost for various recent essays on suffering-focused ethics, priorities, and more, by Magnus Vinding (Sep 28, 2022)

Questioning the Value of Extinction Risk Reduction, by Red Team 8 (Jul 7, 2022)

The Journal of Dangerous Ideas, by rogersbacon1 (Feb 3, 2024)

Joscha Bach on Synthetic Intelligence [annotated], by Roman Leventov (Mar 2, 2023)

New Book: “Minimalist Axiologies: Alternatives to ‘Good Minus Bad’ Views of Value”, by Teo Ajantaival (Jul 19, 2024)

[Job – Academia] Assistant Professor (Philosophy/Ethics) at Lingnan University (Hong Kong), by Andrea Sauchelli (Oct 26, 2021)

Concern About the Intelligence Divide Due to AI, by Soe Lin (Aug 21, 2024)

The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications, by Eleos Arete Citrini (Sep 15, 2021)

Maximising expected utility follows from self-evident premises?, by Vasco Grilo (Jan 19, 2025)