Concepts of existential catastrophe

This is a linkpost for Concepts of existential catastrophe by Hilary Greaves (version of September 2023). Below are some excerpts and my comments.

Abstract

The notion of existential catastrophe is increasingly appealed to in discussion of risk management around emerging technologies, but it is not completely clear what this notion amounts to. Here, I provide an opinionated survey of the space of plausibly useful definitions of existential catastrophe. Inter alia, I discuss: whether to define existential catastrophe in ex post or ex ante terms, whether an ex ante definition should be in terms of loss of expected value or loss of potential, and what kind of probabilities should be involved in any appeal to expected value.

1. Introduction and motivations

Humanity today arguably faces various very significant existential risks, especially from new and anticipated technologies such as nuclear weapons, synthetic biology and advanced artificial intelligence (Rees 2003, Posner 2004, Bostrom 2014, Häggström 2016, Ord 2020). [...]

An existential risk is a risk of an existential catastrophe. An existential catastrophe is a particular type of possible event. This much is relatively clear. But there is not complete clarity, or uniformity of terminology, over what exactly it is for a given possible event to count as an existential catastrophe. Unclarity is no friend of fruitful discussion. Because of the importance of the topic, it is worth clarifying this as much as we can. The present paper is intended as a contribution to this task.

I have estimated a near-term annual risk of human extinction from nuclear war of 5.93*10^-12, from war on priors of 6.36*10^-14, and from terrorist attacks on priors of 4.35*10^-15. How these compare with the respective near-term annual existential risks depends on how one defines an existential catastrophe, but I guess existential catastrophes from nuclear weapons and synthetic biology are astronomically unlikely in the absence of transformative artificial intelligence (TAI).

2. Defining “existential catastrophe” in terms of extinction

Among those who discuss existential risk, one concern is the possibility that humanity might go extinct in, say, the next century or two. This would plausibly be a massive-scale catastrophe not only from a partial point of view that has special concern for humans, but also by impartial lights, even if other species survived. The reason is that among species presently on Earth, humans are special. The specialness has two aspects. First, an especially high degree of sentience: humans have significantly higher capacity for well-being than individuals of most other species [...] Secondly, intelligence: because of the greater intelligence of humans, we alone have developed technologies facilitating massive rises in population size and in standards of living, and we alone have any remotely realistic prospect of settling parts of the universe beyond the Earth [...]

I agree humans have an especially high degree of sentience per individual, but not per unit of energy consumption. I estimated the welfare range per calorie consumed for 6 species besides humans, and it ranged from 47.3 % (pigs) to 4.88 k times (bees) that of humans. So I would be very surprised if human experiences were the most efficient way of creating welfare.
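As a minimal sketch of the normalisation I have in mind, the snippet below divides each species' welfare range by its calorie consumption and expresses the result relative to humans. The input numbers are arbitrary placeholders for illustration only, so the outputs will not match the figures above; the point is just the normalisation.

```python
# Arbitrary placeholder inputs for illustration only (NOT the estimates above):
# welfare range per individual (humans = 1) and calorie consumption (kcal/day).
species = {
    "humans": {"welfare_range": 1.0, "kcal_per_day": 2100.0},
    "pigs":   {"welfare_range": 0.5, "kcal_per_day": 6000.0},
    "bees":   {"welfare_range": 0.07, "kcal_per_day": 0.03},
}

human = species["humans"]
human_welfare_per_kcal = human["welfare_range"] / human["kcal_per_day"]

for name, s in species.items():
    welfare_per_kcal = s["welfare_range"] / s["kcal_per_day"]
    # Welfare range per calorie consumed, relative to humans.
    print(f"{name}: {welfare_per_kcal / human_welfare_per_kcal:.3g} times that of humans")
```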

Candidate Definition 1.1. An event is an existential catastrophe iff it is the extinction of the human species.

At least two things, however, are wrong with this definition.

First, it is inevitable that humanity will go extinct at some point, if only because of the eventual heat death of the universe [Wikipedia’s page]. If humanity survived as long as was compatible with the facts of thermodynamics, its eventual extinction would not count as an existential catastrophe in any decision-relevant sense. What is catastrophic is not extinction per se, but rather early or premature extinction.

Second, we should not fixate on the disappearance of the human species per se. If Homo sapiens underwent continued evolution to such an extent that our successors came to count as members of a distinct biological species, that would not in and of itself be cause for concern. If Homo sapiens was replaced as the dominant species on Earth by some other type of entity, either of our own creation (genetically enhanced “posthumanity”, artificial intelligence) or not (the result of mutations in a competitor species [Homo sapiens is the only surviving species among the 13 of the genus Homo]), that also need not be a catastrophe from the impartial point of view, provided that the takeover species also possesses the morally relevant properties that made humans special in the first place (i.e., according to our above account of the “specialness”, intelligence and sentience).

A catastrophe causing human extinction would not be existential if there was a high probability of an intelligent sentient species evolving again (relatedly). I see this as a particular case of what is referred to above as Homo sapiens being replaced by a competitor species.

I actually think human extinction would be very unlikely to be an existential catastrophe if it was not caused by TAI. For example, I think there would only be a 0.0513 % (= e^(-10^9/(132*10^6))) chance of a repetition of the last mass extinction 66 M years ago, the Cretaceous–Paleogene extinction event, being existential. I got my estimate assuming the following (see the sketch after this list):

  • An exponential distribution with a mean of 132 M years (= 66*10^6*2) represents the time between i) human extinction in such a catastrophe and ii) the evolution of an intelligent sentient species after such a catastrophe. I supposed this on the basis that:

    • An exponential distribution with a mean of 66 M years describes the time between:

      • 2 consecutive such catastrophes.

      • i) and ii) if there are no such catastrophes.

    • Given the above, the two times just described are equally likely to be the shorter one. So the probability of an intelligent sentient species evolving before the next such catastrophe is 50 % (= 1/2).

    • Consequently, one should expect the time between i) and ii) to be 2 times (= 1/0.50) as long as it would be if there were no such catastrophes.

  • An intelligent sentient species has 1 billion years to evolve before the Earth becomes uninhabitable.
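Here is a minimal sketch of that calculation, assuming the exponential model just described (the 132 M years mean and the 1 billion year window are the figures used above):

```python
from math import exp

mean_between_catastrophes = 66e6  # years; exponential mean time between such catastrophes
mean_to_reevolution = 2 * mean_between_catastrophes  # 132 M years, doubled as argued above
window = 1e9  # years left for an intelligent sentient species to evolve

# Probability that no intelligent sentient species evolves within the window,
# i.e. the probability that human extinction in such a catastrophe is existential.
p_existential = exp(-window / mean_to_reevolution)
print(f"{p_existential:.4%}")  # ~0.0513 %
```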

I expect other intelligent sentient species would be roughly as good as humans. As a cautionary tale against guessing they would be worse, it is worth noting the last mass extinction may well have contributed to the emergence of mammals and ultimately humans.

It looks like Toby Ord did not consider in The Precipice the possibility of other intelligent sentient species evolving following human extinction. Otherwise, I do not see how his guesses for the existential risk from 2021 to 2120 could be so high. For example:

  • Toby says the probability of an asteroid larger than 10 km colliding with Earth from 2021 to 2120 is lower than 1 in 150 M (Table 3.1), and guesses that the risk from comets larger than 10 km is of similar magnitude (p. 72), which implies a total collision risk from asteroids and comets larger than 10 km of around 1.33*10^-8 (= 2/(150*10^6)). This is only 1.33 % (= 1.33*10^-8/10^-6) of Toby’s guess for the existential risk from asteroids and comets, which implies Toby expects the vast majority of the existential risk to come from asteroids and comets smaller than 10 km.

  • The last mass extinction “was caused by the impact of a massive asteroid 10 to 15 km (6 to 9 mi) wide”. This size is close to the aforementioned threshold of 10 km, so Toby would expect an asteroid impact similar to that of the last mass extinction to be an existential catastrophe.

  • Nonetheless, according to my calculations, even if such an asteroid was certain to cause human extinction (Salotti 2022 uses a larger threshold of 100 km), there would only be a 0.0513 % chance of an existential catastrophe (see the sketch below).
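A minimal sketch of the arithmetic in the bullets above, using the figures cited there (the 1 in 150 M bound for asteroids, the guess that comets contribute about as much, Toby's 10^-6 existential risk guess, and the conditional 0.0513 % from my earlier calculation):

```python
from math import exp

p_asteroid_gt_10km = 1 / 150e6        # upper bound for a >10 km asteroid impact, 2021 to 2120
p_comet_gt_10km = p_asteroid_gt_10km  # comets guessed to contribute about as much
p_collision_gt_10km = p_asteroid_gt_10km + p_comet_gt_10km
print(f"Total collision risk from >10 km impactors: {p_collision_gt_10km:.3g}")  # ~1.33e-08

toby_x_risk_asteroids_comets = 1e-6   # Toby's guess for existential risk from asteroids and comets
print(f"Share of Toby's guess: {p_collision_gt_10km / toby_x_risk_asteroids_comets:.2%}")  # ~1.33 %

# Conditional probability that human extinction from such an impact is existential,
# under my exponential model above.
p_existential_given_extinction = exp(-1e9 / 132e6)
print(f"Chance of being existential given extinction: {p_existential_given_extinction:.4%}")  # ~0.0513 %
```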

The case for strong longtermism by Hilary and William MacAskill also seems to overlook the above possibility, at least if one interprets humanity as including not only humans, but also any Earth-originating intelligent conscious species:

The non-existence of humanity is a persistent state par excellence. To state the obvious: the chances of humanity re-evolving, if we go extinct, are miniscule.

The probability of an existential catastrophe given human extinction matters for cause prioritisation:

  • The more species on or adjacent to humans’ past evolutionary path go extinct (e.g. due to a larger asteroid/comet), the lower the likelihood of another intelligent and sentient species evolving again, and the higher the existential risk.

  • If human extinction is caused by malevolent TAI, the likelihood of another intelligent and sentient species gaining control over the future would arguably be negligible, so such a catastrophe would clearly be existential.

In other words, even given human extinction, catastrophes:

  • Not involving TAI become worse as their severity increases.

  • Involving TAI become worse as its malevolence increases.

Candidate Definition 1.2. An event is an existential catastrophe iff it is the premature extinction of Earth-originating, intelligent, sentient life.

As I noted previously, human extinction does not necessarily qualify as an existential catastrophe under this definition. It would not correspond to the premature extinction of Earth-originating, intelligent and sentient life if there was a high probability of such life evolving again. Hilary also clarifies in footnote 2 that Homo sapiens being replaced by an intelligent non-sentient entity (benevolent TAI) need not be an existential catastrophe (emphasis mine):

A little more fundamentally, the desideratum is that there exist, throughout the long future, large numbers of high-welfare entities. One way this could come about is if a takeover species is both highly intelligent and highly sentient, and spreads itself throughout the universe (with high-welfare conditions). But other ways are also possible. For example, takeover by non-sentient AI together with extinction of humanity need not constitute an existential catastrophe if the non-sentient AI itself goes on to create large numbers of high-welfare entities of some other kind. (Thanks to Adam Bales for discussion of this point.)

The way I see it, humans had better align TAI with expected total hedonistic utilitarianism (ETHU) rather than merely with avoiding human extinction. Yet avoiding human extinction may well be a good proxy this century, as aligning our successors with ETHU would presumably take some time.

Of course, it is also possible for a catastrophe to be existential without causing human extinction.

The possibilities in question are generally cases in which the size and/or the welfare of the future population is massively reduced, in such a way that total welfare is massively reduced. They include, for example, the possibilities that:

- All-out nuclear war (Sagan 1983, Robock et al 2007, Ellsberg 2017) or a global pandemic (Millet and Snyder-Beattie 2017) decimates the human population. Humanity survives, but with a massively reduced population size, and is reduced to subsistence conditions. Advanced technological civilisation never re-emerges. [In contrast, I have argued there is a high chance that another intelligent sentient species would evolve.]

- Advanced technology allows an oppressive totalitarian regime to take permanent control of the entire world, in such a way that most future people live in very low-welfare conditions (Caplan 2008).

- Extreme climate change permanently and massively reduces the carrying capacity of the Earth, so that many fewer people can live at any given future time, without actually bringing forward the date of human extinction (Sherwood & Huber 2010).

I start the next section with an excerpt that defines existential catastrophe in a way that accounts for the above possibilities.

4. Defining “existential catastrophe” in terms of loss of expected value

Candidate Definition 4.1. An existential catastrophe is an event which brings about the loss of a large fraction of the expected value of the future of humanity.

Importantly, Hilary clarifies in section 8 that:

Generally, where “humanity” appears in a definition of existential catastrophe [as just above], it is to be read as an abbreviation for “Earth-originating intelligent sentient life” [including non-biological life].

With respect to the definition above, Hilary says:

In the end, something very close to this might be the best definition of existential catastrophe. But at least two further clarifications are required.

The first centres on the term “expected”. Expected value is a matter of probability-weighted average value with respect to some particular probability distribution. If talking in terms of expected value (or any other ex ante evaluation), therefore, one key question is what determines which is the relevant probability distribution. For example, a straightforward subjectivist approach would appeal to the evaluator’s own subjective credences (whatever they might be). But one can also consider more objective criteria for picking out the relevant probability distribution.

The second source of unclarity is the term “brings about”. Since the value function on possible worlds is fixed, to bring about a change in expected value, an event must bring about a change in the probability distribution. But there are several, importantly distinct, senses in which this could be the case. In one sense, an event E “brings about” a given shift in the relevant probability distribution if a causal consequence of that event’s occurring is that the shift in question occurs over time (as a temporal change in, for example, the evaluator’s subjective or evidential credences or the objective chances). On an alternative approach, the shift is instead a matter of conditioning or imaging the existing probability distribution in question on E (i.e., on the proposition that the event in question occurs).
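To make the "conditioning" reading of "brings about" concrete, here is a toy sketch in which the worlds, values and credences are invented purely for illustration: expected value is computed under the evaluator's credences, then recomputed after conditioning on the proposition that event E occurs, and the fraction of expected value lost is read off.

```python
# Toy illustration of the "conditioning" reading of "brings about".
# Worlds, values and credences are invented purely for illustration.
worlds = {
    # name: (value of that world, evaluator's credence, does event E occur in it?)
    "flourishing long future": (1_000_000.0, 0.50, False),
    "modest future":           (1_000.0,     0.30, False),
    "unrecovered collapse":    (1.0,         0.15, True),
    "premature extinction":    (0.0,         0.05, True),
}

def expected_value(ws):
    """Probability-weighted average value, renormalising the credences."""
    total = sum(p for _, p, _ in ws.values())
    return sum(v * p for v, p, _ in ws.values()) / total

ev_unconditional = expected_value(worlds)
# Condition on the proposition that E occurs: keep only E-worlds and renormalise.
ev_conditioned_on_e = expected_value({n: w for n, w in worlds.items() if w[2]})

loss_fraction = 1 - ev_conditioned_on_e / ev_unconditional
# A large loss_fraction is what Candidate Definition 4.1 would count as an
# existential catastrophe, on this reading of "brings about".
print(f"Fraction of expected value lost by conditioning on E: {loss_fraction:.1%}")
```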

8. Summary and conclusions

In this paper, I have considered various possible ways in which one might define “existential catastrophe”. One useful notion to focus on is that of the premature extinction of “humanity”, broadly construed. For this to be a useful focus, “humanity” must indeed be construed broadly, so as to avoid fetishization of species boundaries and so as to secure (insofar as is desired) impartiality between humans and other relevantly similar entities. Generally, where “humanity” appears in a definition of existential catastrophe, it is to be read as an abbreviation for “Earth-originating intelligent sentient life” [including non-biological life].

However, scenarios of premature extinction are far from the only locus of concern. Many of the same new and emerging technologies that give rise to concerns about premature human extinction also give rise to similarly serious concerns about similarly bad non-extinction scenarios. It is important to have a concept that also includes these other scenarios.

[...]

Within the “expected value” approach, my own preferred definition of existential catastrophe is:

An existential catastrophe is an event that brings about the permanent loss of a large fraction of the expected value of humanity

I like that Hilary’s preferred definition qualifies the loss of expected value as permanent. This highlights that catastrophes leading to only temporary losses of value would not be existential, such as human extinction followed by the evolution of another intelligent sentient species, or human extinction caused by benevolent non-sentient TAI.