Ethics of existential risk

Last edit: 5 Jun 2021 8:32 UTC by EA Wiki assistant

The ethics of existential risk concerns how bad an existential catastrophe would be, how good it is to reduce existential risk, why these things are as bad or good as they are, and how the answers differ across specific existential risks. Perspectives on these questions vary, and they bear both on how much to prioritise reducing existential risk and on which specific risks to prioritise reducing.

In the effective altruism community, the ethical perspective most associated with existential risk reduction is longtermism: existential risks are often seen as a pressing problem because of the astronomical amounts of value or disvalue potentially at stake over the course of the long-term future. But other ethical perspectives could also lead to a focus on existential risk reduction.

For example, in The Precipice (Ord 2020), Toby Ord discusses five different “moral foundations” for the importance of existential risk reduction: concern for the present (the death and suffering a catastrophe would inflict on people alive today), concern for our future (the loss of humanity’s vast potential), concern for our past (the failure of the projects our ancestors worked towards), civilizational virtues, and cosmic significance.

The “present”-focused moral foundation can also be framed as a “near-termist” or “person-affecting” argument for existential risk reduction (Lewis 2018). In the effective altruism community, this is perhaps the most commonly discussed non-longtermist ethical argument for existential risk reduction. Meanwhile, the “cosmic significance” moral foundation has received some attention among cosmologists and physicists concerned about extinction risk.

However, it is important to distinguish between whether a given ethical perspective would see existential risk reduction as net positive and whether that perspective would prioritise existential risk reduction; this distinction is not always made (see Daniel 2020). The distinction matters because existential risk reduction may be much less tractable, and perhaps less neglected, than some other cause areas (e.g., near-term farmed animal welfare), with these drawbacks outweighed only by its far greater importance from a longtermist perspective. An ethical perspective that regards existential risk reduction as merely comparable in importance to other major global issues may therefore not support prioritising it.


Bibliography

Aird, Michael (2021) Why I think The Precipice might understate the significance of population ethics, Effective Altruism Forum, January 5.

Daniel, Max (2020) Comment on ‘What are the leading critiques of longtermism and related concepts’, Effective Altruism Forum, June 4.

Grimes, Barry (2020) Toby Ord: Fireside chat and Q&A, Effective Altruism Global, March 21.

Lewis, Gregory (2018) The person-affecting value of existential risk reduction, Effective Altruism Forum, April 13.

Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.

Related entries

astronomical waste | existential risk | longtermism | moral philosophy | moral uncertainty | person-affecting views | population ethics | prioritarianism | s-risks | suffering-focused ethics
