Ethics of existential risk

The ethics of existential risk is the study of the ethical issues raised by existential risk, including how bad an existential catastrophe would be, how good it is to reduce existential risk, why these outcomes are as bad or good as they are, and how the answers differ across specific existential risks. There is a range of perspectives on these questions, which have implications both for how much to prioritise reducing existential risk in general and for which specific risks to prioritise.

In The Precipice, Toby Ord discusses five different “moral foundations” for assessing the value of existential risk reduction, depending on whether emphasis is placed on the future, the present, the past, civilizational virtues or cosmic significance.[1]

The future

In one of the earliest discussions of the topic, Derek Parfit offers the following thought experiment:[2]

I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:

  1. Peace.

  2. A nuclear war that kills 99% of the world’s existing population.

  3. A nuclear war that kills 100%.

(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.

The scale of what is lost in an existential catastrophe is determined by humanity’s long-term potential: all the value that would be realized if our species survived indefinitely. The universe’s resources could sustain an astronomically large number of biological human beings, and a far larger number of digital human minds.[3] Even this may not exhaust all the relevant potential, if value supervenes on other things besides human or sentient minds, as some moral theories hold.

In the effective altruism community, this is probably the ethical perspective most associated with existential risk reduction: existential risks are often seen as a pressing problem because of the astronomical amounts of value or disvalue potentially at stake over the course of the long-term future.
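
To illustrate Parfit’s point with simple arithmetic, here is a minimal back-of-the-envelope sketch under a total view. Every figure is a purely illustrative placeholder (in particular, the 10^30 figure for future potential is not drawn from the sources cited in this entry):

```python
# Toy comparison of Parfit's three outcomes under a simple total view.
# All figures are illustrative placeholders, not estimates from the
# sources cited in this entry.

present_population = 8_000_000_000   # roughly the number of people alive today
future_potential = 10**30            # placeholder for the number of future lives
                                     # humanity's long-term potential could hold

value_peace = present_population + future_potential           # outcome (1)
value_war_99 = present_population // 100 + future_potential   # outcome (2): 1% survive
value_war_100 = 0                                              # outcome (3): extinction

gap_1_to_2 = value_peace - value_war_99     # lives lost in the 99% war
gap_2_to_3 = value_war_99 - value_war_100   # everything else, including the future

print(f"(1) vs (2): {gap_1_to_2:.2e}")   # ~7.92e9
print(f"(2) vs (3): {gap_2_to_3:.2e}")   # ~1.00e30
# On these assumptions the second gap dwarfs the first by about 20 orders of
# magnitude, which is the intuition the thought experiment is meant to elicit.
```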

The present

Some philosophers have defended views on which future or contingent people do not matter morally.[4] Even on such views, however, an existential catastrophe could be among the worst things imaginable: it would cut short the lives of every living moral patient, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, a case for reducing existential risk could be grounded in concern for presently existing beings.

This present-focused moral foundation is sometimes described as a “near-termist” or “person-affecting” argument for existential risk reduction.[5] In the effective altruism community, it appears to be the most commonly discussed non-longtermist ethical argument for existential risk reduction.

The past

Humanity can be considered as a vast intergenerational partnership, engaged in the task of gradually increasing its stock of art, culture, wealth, science and technology. In Edmund Burke’s words, “As the ends of such a partnership cannot be obtained except in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born.”[6] On this view, a generation that allowed an existential catastrophe to occur could be regarded as having failed to discharge a moral duty owed to all previous generations.[7]

Civilizational virtues

Instead of focusing on the impacts of individual human action, one can consider the dispositions and character traits displayed by humanity as a whole, which Ord calls civilizational virtues.[8] An ethical framework that attached intrinsic moral significance to the cultivation and exercise of virtue would regard the neglect of existential risks as showing “a staggering deficiency of patience, prudence, and wisdom.”[9]

Cosmic significance

At the beginning of On What Matters, Parfit writes that “We are the animals that can both understand and respond to reasons. [...] We may be the only rational beings in the Universe.”[10] If this is so, then, as Ord writes, “responsibility for the history of the universe is entirely on us: this is the only chance ever to shape the universe toward what is right, what is just, what is best for all.”[11] In addition, it may be the only chance for the universe to understand itself.

Evaluating and prioritizing existential risk reduction

It is important to distinguish between the question of whether a given ethical perspective would see existential risk reduction as net positive and the question of whether that perspective would prioritise existential risk reduction; this distinction is not always made.[12] One reason it matters is that existential risk reduction may be much less tractable, and perhaps less neglected, than some other cause areas (e.g., near-term farmed animal welfare), with these disadvantages offset only by its far greater importance from a longtermist perspective. On an ethical perspective that sees existential risk reduction as merely comparable in importance to other major global issues, it may therefore no longer seem worth prioritising.
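
As a minimal sketch of this distinction, the toy comparison below scores two hypothetical cause areas with a simplified importance-tractability-neglectedness heuristic; every number is invented for illustration and none comes from the sources cited in this entry:

```python
# Toy illustration: an intervention can look net positive under every weighting
# yet only be a top priority under some of them. All scores are invented.

def priority_score(importance: float, tractability: float, neglectedness: float) -> float:
    """Crude multiplicative proxy for expected cost-effectiveness."""
    return importance * tractability * neglectedness

# Hypothetical scores for existential risk reduction vs. another cause area.
other_cause = priority_score(importance=100.0, tractability=1.0, neglectedness=1.0)

# Longtermist weighting: enormous importance outweighs lower tractability.
xrisk_longtermist = priority_score(importance=1_000_000.0, tractability=0.1, neglectedness=1.0)

# Weighting on which x-risk is "only" as important as other major global issues.
xrisk_comparable = priority_score(importance=100.0, tractability=0.1, neglectedness=1.0)

print(xrisk_longtermist > other_cause)  # True: prioritised despite low tractability
print(xrisk_comparable > other_cause)   # False: still net positive, but not a top priority
```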

Further reading

Aird, Michael (2021) Why I think The Precipice might understate the significance of population ethics, Effective Altruism Forum, January 5.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 2.

Related entries

astronomical waste | existential risk | longtermism | moral philosophy | moral uncertainty | person-affecting views | population ethics | prioritarianism | s-risk | suffering-focused ethics

  1. ^

    Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

  2. ^

    Parfit, Derek (1984) Reasons and Persons, Oxford: Clarendon Press, pp. 453–454.

  3. ^

    Bostrom, Nick, Allan Dafoe & Carrick Flynn (2020) Public policy and superintelligent AI, in S. Matthew Liao (ed.), Ethics of Artificial Intelligence, Oxford: Oxford University Press, p. 319.

  4. ^

    Narveson, Jan (1973) Moral problems of population, Monist, vol. 57, pp. 62–86.

  5. ^

    Lewis, Gregory (2018) The person-affecting value of existential risk reduction, Effective Altruism Forum, April 13.

  6. ^

    Burke, Edmund (1790) Reflections on the Revolution in France, London: J. Dodsley, p. 193.

  7. ^

    Ord (2020) The Precipice, pp. 49–53.

  8. ^

    Ord (2020) The Precipice, p. 53.

  9. ^

    Grimes, Barry (2020) Toby Ord: Fireside chat and Q&A, Effective Altruism Global, March 21.

  10. ^

    Parfit, Derek (2011) On What Matters, vol. 1, Oxford: Oxford University Press, p. 31.

  11. ^

    Ord (2020) The Precipice, pp. 53 and 55.

  12. ^

    See Daniel, Max (2020) Comment on ‘What are the leading critiques of longtermism and related concepts’, Effective Altruism Forum, June 4.

Tagged posts

Magnus Vinding (23 Aug 2022) Critique of MacAskill’s “Is It Good to Make Happy People?”
Anthony DiGiovanni (1 Jul 2021) A longtermist critique of “The expected value of extinction risk reduction is positive”
Gregory Lewis (13 Apr 2018) The person-affecting value of existential risk reduction
MichaelA (2 May 2021) Thoughts on “The Case for Strong Longtermism” (Greaves & MacAskill)
Denkenberger (27 Nov 2017) How you can save expected lives for $0.20-$400 each and reduce X risk
elifland (14 Aug 2022) Prioritizing x-risks may require caring about future people (www.foxy-scout.com)
80000_Hours (8 Oct 2021) Carl Shulman on the common-sense case for existential risk work and its practical implications
Denkenberger (19 Nov 2018) Cost-Effectiveness of Foods for Global Catastrophes: Even Better than Before?
Ross_Tieman (17 Nov 2019) AGI safety and losing electricity/industry resilience cost-effectiveness
Scott Alexander (6 Apr 2022) “Long-Termism” vs. “Existential Risk”
Gavin (5 Dec 2018) Existential risk as common cause
Denkenberger (29 Oct 2017) Should we be spending no less on alternate foods than AI now?
MichaelA (29 Mar 2022) 8 possible high-level goals for work on nuclear risk
Teo Ajantaival (23 May 2022) Peacefulness, nonviolence, and experientialist minimalism
EA Global (21 Jul 2020) Toby Ord: Fireside Chat and Q&A (www.youtube.com)
jasoncrawford (2 Jun 2021) Help me find the crux between EA/XR and Progress Studies
EA Global (20 Mar 2021) Toby Ord at EA Global: Reconnect (www.youtube.com)
EA Global (1 Mar 2019) Toby Ord: Fireside chat (2018) (www.youtube.com)
MichaelA (3 May 2021) Thoughts on “A case against strong longtermism” (Masrani)
rogersbacon (13 Feb 2024) The Journal of Dangerous Ideas (www.secretorum.life)
Red Team 8 (7 Jul 2022) Questioning the Value of Extinction Risk Reduction
Daniel_Friedrich (4 Nov 2022) A new place to discuss cognitive science, ethics and human alignment (www.facebook.com)
Roman Leventov (2 Mar 2023) Joscha Bach on Synthetic Intelligence [annotated] (www.jimruttshow.com)
Magnus Vinding (28 Sep 2022) Linkpost for various recent essays on suffering-focused ethics, priorities, and more (centerforreducingsuffering.org)
Eleos Arete Citrini (15 Sep 2021) The Metaethics and Normative Ethics of AGI Value Alignment: Many Questions, Some Implications
Andrea Sauchelli (26 Oct 2021) [Job – Academia] Assistant Professor (Philosophy/Ethics) at Lingnan University (Hong Kong)