The ethics of existential risk is the study of the ethical issues raised by existential risk, including how bad an existential catastrophe would be, how good it is to reduce existential risk, why these things are as bad or good as they are, and how the answers differ between specific existential risks. There is a range of perspectives on these questions, and they have implications for how much to prioritise reducing existential risk in general and which specific risks to prioritise reducing.
In The Precipice, Toby Ord discusses five different “moral foundations” for assessing the value of existential risk reduction, depending on whether emphasis is placed on the future, the present, the past, civilizational virtues or cosmic significance.[1]
The future
In one of the earliest discussions of the topic, Derek Parfit offers the following thought experiment:[2]
I believe that if we destroy mankind, as we now can, this outcome will be much worse than most people think. Compare three outcomes:
1. Peace.
2. A nuclear war that kills 99% of the world’s existing population.
3. A nuclear war that kills 100%.
(2) would be worse than (1), and (3) would be worse than (2). Which is the greater of these two differences? Most people believe that the greater difference is between (1) and (2). I believe that the difference between (2) and (3) is very much greater.
The scale of what is lost in an existential catastrophe is determined by humanity’s long-term potential: all the value that would be realized if our species survived indefinitely. The universe’s resources could sustain astronomically large numbers of biological human beings, and far larger numbers of digital human minds.[3] And this may not exhaust all the relevant potential, if value supervenes on other things besides human or sentient minds, as some moral theories hold.
In the effective altruism community, this is probably the ethical perspective most associated with existential risk reduction: existential risks are often seen as a pressing problem because of the astronomical amounts of value or disvalue potentially at stake over the course of the long-term future.
The present
Some philosophers have defended views on which future or contingent people do not matter morally.[4] Even on such views, however, an existential catastrophe could be among the worst things imaginable: it would cut short the lives of every living moral patient, destroying all of what makes their lives valuable, and most likely subjecting many of them to profound suffering. So even setting aside the value of future generations, a case for reducing existential risk can be grounded in concern for presently existing beings.
This present-focused moral foundation is sometimes framed as a “near-termist” or “person-affecting” argument for existential risk reduction.[5] In the effective altruism community, it appears to be the most commonly discussed non-longtermist ethical argument for existential risk reduction.
The past
Humanity can be considered a vast intergenerational partnership, engaged in the task of gradually increasing its stock of art, culture, wealth, science and technology. In Edmund Burke’s words, “As the ends of such a partnership cannot be obtained except in many generations, it becomes a partnership not only between those who are living, but between those who are living, those who are dead, and those who are to be born.”[6] On this view, a generation that allowed an existential catastrophe to occur could be regarded as having failed to discharge a moral duty owed to all previous generations.[7]
Civilizational virtues
Instead of focusing on the impacts of individual human action, one can consider the dispositions and character traits displayed by humanity as a whole, which Ord calls civilizational virtues.[8] An ethical framework that attached intrinsic moral significance to the cultivation and exercise of virtue would regard the neglect of existential risks as showing “a staggering deficiency of patience, prudence, and wisdom.”[9]
Cosmic significance
At the beginning of On What Matters, Parfit writes that “We are the animals that can both understand and respond to reasons. [...] We may be the only rational beings in the Universe.”[10] If this is so, then, as Ord writes, “responsibility for the history of the universe is entirely on us: this is the only chance ever to shape the universe toward what is right, what is just, what is best for all.”[11] In addition, it may be the only chance for the universe to understand itself.
Evaluating and prioritizing existential risk reduction
It is important to distinguish the question of whether a given ethical perspective regards existential risk reduction as net positive from the question of whether that perspective would prioritise existential risk reduction; this distinction is not always made.[12] One reason the distinction matters is that existential risk reduction may be much less tractable, and perhaps less neglected, than some other cause areas (e.g., near-term farmed animal welfare), with this being made up for only by its far greater importance from a longtermist perspective. If one instead adopts an ethical perspective on which existential risk reduction is merely comparable in importance to other major global issues, it may no longer seem worth prioritising.
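The reasoning above can be illustrated with a toy calculation. The sketch below is purely illustrative: it assumes a simple multiplicative importance–tractability–neglectedness score of the kind sometimes used in effective altruism cause prioritisation, and all of the numbers are hypothetical rather than taken from any source cited here.

```python
def priority(importance, tractability, neglectedness):
    """Toy ITN-style score; higher means more worth prioritising (illustrative only)."""
    return importance * tractability * neglectedness

# Hypothetical inputs: existential risk reduction is assumed to be less tractable and
# (here) less neglected than a comparison cause such as near-term farmed animal welfare.
xrisk = dict(tractability=0.2, neglectedness=0.5)
other = dict(tractability=0.8, neglectedness=0.8)

# On a longtermist view that grants existential risk vastly greater importance,
# it dominates despite the other factors.
assert priority(1000, **xrisk) > priority(1, **other)

# On a view that sees it as only about as important as other major global issues,
# the same tractability and neglectedness figures no longer favour it.
assert priority(1, **xrisk) < priority(1, **other)
```

The point is not the specific figures but the structure: whether existential risk reduction comes out as a top priority depends heavily on how much extra importance a given ethical perspective assigns to it.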
Further reading
Aird, Michael (2021) Why I think The Precipice might understate the significance of population ethics, Effective Altruism Forum, January 5.
Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing, ch. 2.
Related entries
astronomical waste | existential risk | longtermism | moral philosophy | moral uncertainty | person-affecting views | population ethics | prioritarianism | s-risk | suffering-focused ethics
1. Ord, Toby (2020) The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury Publishing.
2. Parfit, Derek (1984) Reasons and Persons, Oxford: Clarendon Press, pp. 453–454.
3. Bostrom, Nick, Allan Dafoe & Carrick Flynn (2020) Public Policy and Superintelligent AI, in S. Matthew Liao (ed.), Ethics of Artificial Intelligence, Oxford: Oxford University Press, p. 319.
4. Narveson, Jan (1973) Moral problems of population, Monist, vol. 57, pp. 62–86.
5. Lewis, Gregory (2018) The person-affecting value of existential risk reduction, Effective Altruism Forum, April 13.
6. Burke, Edmund (1790) Reflections on the Revolution in France, London: J. Dodsley, p. 193.
7. Ord (2020) The Precipice, pp. 49–53.
8. Ord (2020) The Precipice, p. 53.
9. Grimes, Barry (2020) Toby Ord: Fireside chat and Q&A, Effective Altruism Global, March 21.
10. Parfit, Derek (2011) On What Matters, vol. 1, Oxford: Oxford University Press, p. 31.
11. Ord (2020) The Precipice, pp. 53 and 55.
12. See Daniel, Max (2020) Comment on ‘What are the leading critiques of longtermism and related concepts’, Effective Altruism Forum, June 4.
I think this is probably worth citing here, but I’ve only read the abstract myself: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2807377
Weak disagree. FWIW, lots of good cites in endnotes to chapter 2 of The Precipice, pp. 305–12; and Moynihan’s X-Risk.
Btw, there’s a short section on this in my old Existential Risk Wikipedia draft. Maybe some useful stuff to incorporate into this.
I tried to incorporate parts of that section, and in the process reorganized and expanded the article. Feel free to edit anything that seems inadequate.
I wrote:
Maybe the wiki shouldn’t say “it is important” for something that could be contested? But I think it’d be pretty hard for a reasonable person to contest this. And I did give a source for someone else saying it (though it’s just a Forum comment).
I wrote:
This is based on my own non-systematic observations, rather than systematic data collection or some other source. I could be wrong. Or maybe it’s obvious enough that I’m right that “perhaps” should be removed.
I also wrote:
This is something I recall Ord saying, maybe in multiple places, but I can’t recall where. I think maybe his talk at the recent SERI conference? Ideally, someone will find a source, add it, and make the sentence more specific.
Things it’d be good to add:
Discussion of other moral perspectives, beyond the 5 Ord mentions
E.g., self-regarding, pure time discounting, person-affecting
In some cases, these involve “carving things up” differently to how Ord does, rather than just adding categories (e.g., pure time discounting and person-affecting could both be specific things that lead to a basically “present”-focused perspective)
Discussion of distinct moral perspectives given longtermism
e.g. longtermism plus an asymmetric person-affecting view
e.g., longtermism plus other suffering-focused ethical views
Discussion of how these different perspectives have different implications for which risks to prioritise
Perhaps most notably extinction risk vs reducing risks of worse-than-extinction futures vs improving quality of life conditional on survival
Discussion of how this might affect prioritisation of GCRs
E.g., I think the present- and civilizational-virtue-focused arguments apply to GCR reduction, but the past- and cosmic-significance-focused arguments probably don’t
If the current name and scope are kept, it should be made clear that this is a distinct category that is being discussed as a sort of aside (i.e., that GCRs and x-risks are not the same things)
I’m not sure what the ideal scope and name for this tag would be. Pablo and I discussed that at some length here.
Here are some possible scopes:
Basically all the main arguments for or against prioritising reducing existential risks
E.g., the 5 moral perspectives Ord discusses in The Precipice, focused on the past, the present, the future, civilizational virtues, and cosmic significance
Also things like pure time discounting and population ethics
But also empirical, epistemological, or decision-theoretic arguments
E.g., the idea that the future might be net negative and that this might push against extinction risk reduction (though not necessarily against reducing some other existential risks), or the epistemic challenge to longtermism
Basically all the main moral perspectives that might support or oppose prioritising reducing x-risks
So not including empirical, epistemological, or decision-theoretic arguments, except maybe in passing
Basically all the main non-longtermist moral perspectives that might support or oppose reducing x-risks
E.g., the 4 moral perspectives Ord discusses except the one focused on the future
The reason we might want to have this scope is that it’s already pretty easy to find discussion of the longtermist arguments for prioritising reducing x-risks, and there might be more value added by having a place dedicated to collecting the somewhat less common perspectives
Just the near-termist argument in support of reducing x-risks
The reason we might want to have this scope is that this is probably the most prominent non-longtermist argument for x-risk reduction, and the one that seems most important to me, and I think to Pablo and various other EAs
Relatedly, this also seems like the most “EA-aligned” non-longtermist argument for x-risk reduction
Any of the above, but focused more specifically on just arguments for prioritising x-risk reduction
Maybe also covering direct rebuttals, but not covering distinct arguments against
But I’m not sure how well that distinction can be made
Any of the above, but for extinction risk specifically
Any of the above, but for global catastrophic risks more broadly
(Well, “more broadly” is a bit misleading, since some existential risks wouldn’t be global catastrophic risks. But I think GCRs can mostly be thought of as a broader term.)
Currently I think I lean towards 2 or 3, with 1 just behind. But I’m unsure.
With all of that in mind, here are some possible names, in roughly descending order of how much I like them:
Moral perspectives on existential risk reduction
Non-longtermist perspectives on existential risk reduction
Existential risk prioritization for non-longtermists
Non-longtermist arguments for existential risk reduction
Arguments for reducing existential risk
Arguments for existential risk prioritization
Near-termist existential risk prioritization
Near-termist arguments for existential risk reduction
Alternative perspectives on existential risk prioritization
I don’t really like tag names that say “alternative” in a way that just assumes everyone will know what they’re alternative to, but I’m throwing the idea out there anyway, and we do have some other tags with names like that
Any of the above names, but with “x-risks” instead (just to shorten it, while keeping the scope the same)
Any of the “Arguments for” names, but with “Arguments for and against” instead
Any of the above names that don’t say “prioritisation” or similar, but tweaked to say “prioritising existential risks” or “prioritising existential risk reduction” or similar
Any of the above names, but with “extinction risk” or “GCRs” or “global catastrophic risks” (this would change the scope)
Any of the “near-termist” names, but with “short-termist” instead
How about “ethics of existential risk reduction”?
“Ethics of X” is a standard phrase.
Ok, I’ve now changed the title and changed the first sentence to:
This makes me notice something that’s a bit odd about this wiki (compared to Wikipedia), which is that sometimes we’re kind of making up a name and scope for what was really, until a given wiki edit, just a bundle of papers, blog posts, etc. Like, authors hadn’t necessarily said “My paper is part of the body of work on the ethics of existential risk”, and no one had previously said specifically that the ethics of existential risk covers those 4 questions I mention there. So this edit of mine is quite “original research”-y in the Wikipedia sense.
Perhaps a more honest phrasing would be “We could use the term ‘the ethics of existential risk’ to describe a disparate, scattered collection of work that covers some combination of the questions of...” But that sounds less encyclopaedic.
I’m not sure this is a problem, but it seems slightly odd, and I wanted to flag it in case other people had thoughts.
Expanding on “I’m not sure this is a problem”: I feel like I, Pablo, and probably some other people are happy with me making edits like this, which are kind-of original research yet phrased as if what we’re describing already existed with that name. But I don’t know if we should be ok with editors in general doing that. So maybe we should have some policy indicating when it is vs isn’t ok, how to approach it, that people should flag on the Discussion page when they’ve done it so it can be reviewed, etc.?
Actually, Googling “ethics of existential risk” does yield a fair number of hits at FHI, 80,000 Hours, etc. So I think calling it that isn’t at risk of being original research.
Regarding your last paragraph, I think that it’s in general a good idea if people flag on the Discussion page when they want to make big and non-obvious edits or additions (the threshold can be discussed). But that’s a more general issue (doesn’t just pertain to edits that could be seen as original research). I don’t have a clear sense of exactly how it should be done, though.
What do folks think of just ‘Ethics of existential risk’? The form would match other Wiki entries, such as Psychology of effective altruism. Also, similar formulas have been used in the academic literature: e.g. the subtitle of John Leslie’s book is The Science and Ethics of Human Extinction (as opposed to The Science and Ethics of Human Extinction Prevention). I don’t have a particularly strong preference, though.
I prefer this option to all others mentioned here.
Yes, I think I prefer that (see my subsequent comment).
Yeah, I think that’s probably better than all the suggestions listed above (including the current name). My current plan: Wait a day or so to see if there’s any further commentary, and then probably change the name to “ethics of existential risk reduction”.
Thanks for the suggestion :)
Great! Or just “ethics of existential risk”.
Also, my hunch is that “existential risk” is better than “x-risk” in Wiki articles, since I think the Wiki should have a somewhat formal tone.
Yeah, thanks again, I think those are both good suggestions.
I usually prefer “existential risk” in general, and especially for the wiki. I deliberately decided to deviate from that general policy here, but I can’t remember for sure why, and I’m not sure I endorse it. I think it was basically just that the term is used a lot here, so it’s a bit annoying to write and read the full version every time. But that’s probably outweighed by the perks of sounding professional/formal. I’ve now switched this entry’s text to saying “existential risk”.
Thanks! Yeah, I get that it may look slightly clunky, but I also agree that that’s outweighed by the advantages of sounding more formal.
FWIW, and setting aside stylistic considerations for the Wiki, I dislike ‘x-risk’ as a term and avoid using it myself even in informal discussions.
it’s ambiguous between ‘extinction’ and ‘existential’, which is already a common confusion
it seems unserious and somewhat flippant (vaguely comic book/sci-fi vibes)
the ‘x’ prefix can denote the edgy, or sexual (e.g. X Games; x-rated; Generation X?)
‘x’ also often denotes an unknown value (e.g. in ‘Cause X’ — another abbreviation I dislike; or indeed Stefan’s comment earlier in this thread)
Thanks for this comment. I was already aware of the first two downsides, and often lean away from the term for those reasons. But I hadn’t considered the other two downsides, and they make sense to me, so this updates me towards more consistently avoiding the term.
Out of interest, do you use “x-risk” in e.g. Slack threads, Google Doc comments, and conversations at lunch? I.e., in contexts that are not just informal but also private and two-way (so it’s easier to notice if something has been misunderstood or left a bad impression)? I think by default I’d continue to do that myself.
Thanks. I updated the Style Guide to reflect this:
I can’t off the top of my head think of situations where the abbreviated form would be more appropriate, but if others have concrete cases in mind, please mention them here so that we can revise the Guide, if necessary.