Collection of some definitions of global catastrophic risks (GCRs)
See also Venn diagrams of existential, global, and suffering catastrophes
Bostrom & Ćirković (pages 1 and 2):
The term "global catastrophic risk" lacks a sharp definition. We use it to refer, loosely, to a risk that might have the potential to inflict serious damage to human well-being on a global scale.
[...] a catastrophe that caused 10,000 fatalities or 10 billion dollars worth of economic damage (e.g., a major earthquake) would not qualify as a global catastrophe. A catastrophe that caused 10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic) would count as a global catastrophe, even if some region of the world escaped unscathed. As for disasters falling between these points, the definition is vague. The stipulation of a precise cut-off does not appear needful at this stage. [emphasis added]
Open Philanthropy Project/GiveWell:
risks that could be bad enough to change the very long-term trajectory of humanity in a less favorable direction (e.g. ranging from a dramatic slowdown in the improvement of global standards of living to the end of industrial civilization or human extinction).
Global Challenges Foundation:
Wikipedia (drawing on Bostrom's works):
a hypothetical future event which could damage human well-being on a global scale, even endangering or destroying modern civilization. [...]
any risk that is at least "global" in scope, and is not subjectively "imperceptible" in intensity.
Yassif (appearing to be writing for the Open Philanthropy Project):
By our working definition, a GCR is something that could permanently alter the trajectory of human civilization in a way that would undermine its long-term potential or, in the most extreme case, threaten its survival. This prompts the question: How severe would a pandemic need to be to create such a catastrophic outcome? [This is followed by interesting discussion of that question.]
Beckstead (writing for Open Philanthropy Project/GiveWell):
the Open Philanthropy Project's work on global catastrophic risks focuses on both potential outright extinction events and global catastrophes that, while not threatening direct extinction, could have deaths amounting to a significant fraction of the world's population or cause global disruptions far outside the range of historical experience.
(Note that Beckstead might not be saying that global catastrophes are defined as those that "could have deaths amounting to a significant fraction of the world's population or cause global disruptions far outside the range of historical experience". He might instead mean that Open Phil is focused on the relatively extreme subset of global catastrophes which fit that description. It may be worth noting that he later quotes Open Phil's other, earlier definition of GCRs, which I listed above.)
I intend to add to this list over time. If you know of other relevant work, please mention it in a comment.
My half-baked commentary
My impression is that, at least in EA-type circles, the term "global catastrophic risk" is typically used for events substantially larger than things which cause "10 million fatalities or 10 trillion dollars worth of economic loss (e.g., an influenza pandemic)".
E.g., the Global Challenges Foundation's definition implies that the catastrophe would have to be able to eliminate at least ~750 million people, which is 75 times the figure Bostrom & Ćirković give. And I'm aware of at least some existential-risk-focused EAs whose impression is that the rough cutoff would be 100 million fatalities.
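(To spell out the arithmetic behind that figure, on my understanding that the Global Challenges Foundation's threshold is roughly a tenth of the world's population: 0.10 × ~7.5 billion ≈ 750 million, and 750 million ÷ 10 million = 75 times Bostrom & Ćirković's 10 million.)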
With that in mind, I also find it interesting to note that Bostrom & Ćirković gave the "10 million fatalities" figure as indicating that something clearly is a GCR, rather than as the lower threshold a risk must clear in order to be a GCR. From their loose definition, it seems entirely plausible that, for example, a risk that would cause 1 million fatalities might be a GCR.
That said, I do agree that "The stipulation of a precise cut-off does not appear needful at this stage." Personally, I plan to continue to use the term in a quite loose way, but probably primarily for risks that could cause much more than 10 million fatalities.
There is now a Stanford Existential Risk Initiative, which (confusingly) describes itself as:
a collaboration between Stanford faculty and students dedicated to mitigating global catastrophic risks (GCRs). Our goal is to foster engagement from students and professors to produce meaningful work aiming to preserve the future of humanity by providing skill, knowledge development, networking, and professional pathways for Stanford community members interested in pursuing GCR reduction. [emphasis added]
And they write:
What is a Global Catastrophic Risk?
We think of global catastrophic risks (GCRs) as risks that could cause the collapse of human civilization or even the extinction of the human species.
That is much closer to a definition of an existential risk (as long as we assume that the collapse is not recovered from) than of a global catastrophic risk. Given that, and given the clash between the term the initiative uses in its name and the term it uses when describing what they'll focus on, it appears this initiative is conflating these two terms/concepts.
This is unfortunate, and could lead to confusion, given that there are many events that would be global catastrophes without being existential catastrophes. An example would be a pandemic that kills hundreds of millions but that doesn't cause civilizational collapse, or that causes a collapse humanity later fully recovers from. (Furthermore, there may be existential catastrophes that aren't "global catastrophes" in the standard sense, such as "plateauing – progress flattens out at a level perhaps somewhat higher than the present level but far below technological maturity" (Bostrom).)
For further discussion, see Clarifying existential risks and existential catastrophes.
(I should note that I have positive impressions of the Center for International Security and Cooperation (which this initiative is a part of), that I'm very glad to see that this initiative has been set up, and that I expect they'll do very valuable work. I'm merely critiquing their use of terms.)
Some more definitions, from or quoted in 80k's profile on reducing global catastrophic biological risks
Gregory Lewis, in that profile itself:
Global catastrophic risks (GCRs) are roughly defined as risks that threaten great worldwide damage to human welfare, and place the long-term trajectory of humankind in jeopardy. Existential risks are the most extreme members of this class.
Open Philanthropy Project:
[W]e use the term "global catastrophic risks" to refer to risks that could be globally destabilising enough to permanently worsen humanity's future or lead to human extinction.
Schoch-Spana et al. (2017), on GCBRs, rather than GCRs as a whole:
The Johns Hopkins Center for Health Security's working definition of global catastrophic biological risks (GCBRs): those events in which biological agents – whether naturally emerging or reemerging, deliberately created and released, or laboratory engineered and escaped – could lead to sudden, extraordinary, widespread disaster beyond the collective capability of national and international governments and the private sector to control. If unchecked, GCBRs would lead to great suffering, loss of life, and sustained damage to national governments, international relationships, economies, societal stability, or global security.
Metaculus features a series of questions on global catastrophic risks. The author of these questions operationalises a global catastrophe as an event in which "the human population decrease[s] by at least 10% during any period of 5 years or less".
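For concreteness, here is a minimal sketch of how that operationalisation could be checked against a yearly population series. This is my own illustration rather than anything Metaculus provides; the function name, the windowing choice, and the example figures are all hypothetical.

```python
# Rough check of the Metaculus-style criterion against a yearly population series.
# (Hypothetical helper, written purely for illustration.)

def is_global_catastrophe(population_by_year: dict[int, float]) -> bool:
    """True if the population falls by at least 10% within any period of 5 years or less."""
    years = sorted(population_by_year)
    for i, start in enumerate(years):
        for end in years[i + 1:]:
            if end - start > 5:
                break  # later years fall outside the 5-year window for this start year
            drop = 1 - population_by_year[end] / population_by_year[start]
            if drop >= 0.10:
                return True
    return False

# Hypothetical example: ~7.8bn falling to ~6.9bn within 3 years is a ~11.5% drop.
print(is_global_catastrophe({2024: 7.8e9, 2025: 7.6e9, 2027: 6.9e9}))  # True
```

One thing a check like this makes obvious is that the criterion depends on the baseline the drop is measured from (here, the start of each window), which is one of the ambiguities any such operationalisation has to settle.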
Baum and Barrett (2018) gesture at some additional definitions/conceptualisations of global catastrophic risk that have apparently been used by other authors:
In general terms, a global catastrophe is generally understood to be a major harm to global human civilization. Some studies have focused on catastrophes resulting in human extinction, including early discussions of nuclear winter (Sagan 1983). Several studies posit minimum damage thresholds such as the death of 10% of the human population (Cotton-Barratt et al. 2016), the death of 25% of the human population (Atkinson 1999), or 10⁴ to 10⁷ deaths or $10⁹ to $10¹² in damages (Bostrom and Ćirković 2008). Other studies define global catastrophe as an event that exceeds the resilience of global human civilization, resulting in its collapse (Maher and Baum 2013; Baum and Handoh 2014).
From an FLI podcast interview with two researchers from CSER:
"Ariel Conn: [...] I was hoping you could quickly go over a reminder of what an existential threat is and how that differs from a catastrophic threat and if there's any other terminology that you think is useful for people to understand before we start looking at the extreme threats of climate change."
Simon Beard: So, we use these various terms as kind of terms of art within the field of existential risk studies, in a sense. We know what we mean by them, but all of them, in a way, are different ways of pointing to the same kind of outcome – which is something unexpectedly, unprecedentedly bad. And, actually, once you've got your head around that, different groups have slightly different understandings of what the differences between these three terms are.
So, for some groups, it's all about just the scale of badness. So, an extreme risk is one that does a sort of an extreme level of harm; a catastrophic risk does more harm, a catastrophic level of harm. And an existential risk is something where either everyone dies, human extinction occurs, or you have an outcome which is an equivalent amount of harm: maybe some people survive, but their lives are terrible. Actually, at the Center for the Study of Existential Risk, we are concerned about this classification in terms of the cost involved, but we also have coupled that with a slightly different sort of terminology, which is really about systems and the operation of the global systems that surround us.
Most of the systems – be this physiological systems, the world's ecological system, the social, economic, technological, cultural systems that surround those institutions that we build on – they have a kind of normal space of operation where they do the things that you expect them to do. And this is what human life, human flourishing, and human survival are built on: that we can get food from the biosphere, that our bodies will continue to operate in a way that's consistent with and supporting our health and our continued survival, and that the institutions that we've developed will still work, will still deliver food to our tables, will still suppress interpersonal and international violence, and that we'll basically, we'll be able to get on with our lives.
If you look at it that way, then an extreme risk, or an extreme threat, is one that pushes at least one of these systems outside of its normal boundaries of operation and creates an abnormal behavior that we then have to work really hard to respond to. A catastrophic risk is one where that happens, but then that also cascades. Particularly in global catastrophe, you have a whole system that encompasses everyone all around the world, or maybe a set of systems that encompass everyone all around the world, that are all operating in this abnormal state that's really hard for us to respond to.
And then an existential catastrophe is one where the systems have been pushed into such an abnormal state that either you can't get them back or it's going to be really hard. And life as we know it cannot be resumed; we're going to have to live in a very different and very inferior world, at least from our current way of thinking." (emphasis added)
Sears writes:
The term "global catastrophic risk" (GCR) is increasingly used in the scholarly community to refer to a category of threats that are global in scope, catastrophic in intensity, and non-zero in probability (Bostrom and Cirkovic, 2008). [...] The GCR framework is concerned with low-probability, high-consequence scenarios that threaten humankind as a whole (Avin et al., 2018; Beck, 2009; Kuhlemann, 2018; Liu, 2018)
(Personally, I don't think I like that second sentence. I'm not sure what "threaten humankind" is meant to mean, but I'm not sure I'd count something that e.g. causes huge casualties on just one continent, or 20% casualties spread globally, as threatening humankind. Or if I did, I'd be meaning something like "threatens some humans", in which case I'd also count risks much smaller than GCRs. So this sentence sounds to me like it's sort-of conflating GCRs with existential risks.)