[Paper] Global Catastrophic and Existential Risks Communication Scale, similar to Torino scale
We (Alexey Turchin and David Denkenberger) have a new paper out where we suggest a scale to communicate the size of global catastrophic and existential risks.
For impact risks, we have the Torino scale of asteroid danger, which has five color-coded levels. For hurricanes, we have the Saffir-Simpson scale with five categories. Here we present a similar scale for communicating the size of global catastrophic and existential risks.
Typically, vague claims about probability are used to communicate existential risks; for example, someone may say, “There is a 10 per cent chance that humanity will be exterminated by nuclear war.” But the probability of the most serious global risks is difficult to measure, and a probability estimate does not capture other aspects of a risk, such as preventability, uncertainty, timing, and its relation to other risks. As a result, claims about probability can be misleading or provoke reasonable skepticism.
To avoid these difficulties, we suggest a scale for communicating existential risks, similar to the Torino scale of asteroid danger.
In our scale, there are six color codes, from white to purple. If hard probabilities are known, the color corresponds to a probability interval over a fixed timeframe of 100 years, which helps address uncertainty and timing.
However, for the most serious risks, like AI, the probabilities are not known, but the required levels of prevention action are. For these cases, the scale communicates the risk’s size through the required level of prevention action. In some sense, this is similar to Updateless Decision Theory, where an event’s significance is measured not by observable probabilities but by the utility of the corresponding actions. The system would work because, in many cases of x-risks, the required prevention actions are not very sensitive to the probability.
How should the scale be implemented in practice? If probabilities are not known, a group of experts should aggregate the available information and communicate it to the public and policymakers, saying something like: “We think that AI is a red risk, a pandemic is a yellow risk and asteroid danger is a green risk.” This would help bring some order to the public perception of each risk (where, currently, asteroid danger is clearly overestimated compared to the risk from AI), without making unsustainable claims about unmeasurable probabilities.
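To make the idea concrete, here is a minimal sketch (mine, not from the paper) of how such a classification could work: a risk gets a color from its 100-year probability when one is available, and otherwise directly from an expert-assigned prevention level. The threshold values below are illustrative placeholders, not the intervals defined in the paper.

```python
# Illustrative sketch only: these probability thresholds are placeholders,
# not the 100-year probability intervals defined in the paper.
COLORS = ["white", "green", "yellow", "orange", "red", "purple"]

# Hypothetical lower bound of the 100-year probability interval for each color.
PROBABILITY_THRESHOLDS = {
    "white": 0.0,
    "green": 1e-6,
    "yellow": 1e-4,
    "orange": 1e-2,
    "red": 1e-1,
    "purple": 0.5,
}


def color_from_probability(p_100yr: float) -> str:
    """Assign the highest color whose probability threshold is met."""
    assigned = "white"
    for color in COLORS:
        if p_100yr >= PROBABILITY_THRESHOLDS[color]:
            assigned = color
    return assigned


def color_for_risk(p_100yr=None, expert_color=None) -> str:
    """Use the probability when known; otherwise fall back on an expert
    judgment expressed directly as a required-prevention color level."""
    if p_100yr is not None:
        return color_from_probability(p_100yr)
    if expert_color in COLORS:
        return expert_color
    raise ValueError("need either a 100-year probability or an expert color")


print(color_for_risk(p_100yr=0.001))       # -> "yellow" (under these placeholder thresholds)
print(color_for_risk(expert_color="red"))  # -> "red"
```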
In the article we have already given some estimates for the most well-known existential risks, but clearly they are open to debate.
Here’s the abstract:
Existential risks threaten the future of humanity, but they are difficult to measure. However, to communicate, prioritize and mitigate such risks it is important to estimate their relative significance. Risk probabilities are typically used, but for existential risks they are problematic due to ambiguity, and because quantitative probabilities do not represent some aspects of these risks. Thus, a standardized and easily comprehensible instrument is called for, to communicate dangers from various global catastrophic and existential risks. In this article, inspired by the Torino scale of asteroid danger, we suggest a color-coded scale to communicate the magnitude of global catastrophic and existential risks. The scale is based on the probability intervals of risks in the next century if they are available. The risks’ estimations could be adjusted based on their severities and other factors. The scale covers not only existential risks, but smaller size global catastrophic risks. It consists of six color levels, which correspond to previously suggested levels of prevention activity. We estimate artificial intelligence risks as “red”, while “orange” risks include nanotechnology, synthetic biology, full-scale nuclear war and a large global agricultural shortfall (caused by regional nuclear war, coincident extreme weather, etc.) The risks of natural pandemic, supervolcanic eruption and global warming are marked as “yellow” and the danger from asteroids is “green”.
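For quick reference, the example assignments from the abstract could be kept as a simple lookup table (a sketch; the risk names are paraphrased and the paper remains the authoritative source):

```python
# Color codes for well-known risks, as estimated in the paper's abstract.
RISK_COLORS = {
    "artificial intelligence": "red",
    "nanotechnology": "orange",
    "synthetic biology": "orange",
    "full-scale nuclear war": "orange",
    "global agricultural shortfall": "orange",
    "natural pandemic": "yellow",
    "supervolcanic eruption": "yellow",
    "global warming": "yellow",
    "asteroid impact": "green",
}
```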
The paper is published in Futures: https://www.sciencedirect.com/science/article/pii/S001632871730112X
If you want to read the full paper, here’s a link to the preprint: https://philpapers.org/rec/TURGCA
Two main pictures from the paper:
This seems like a good project, and I found the 2-axis picture helpful. The only bit that stood out was global warming. I’m not sure how you’re defining it, but my sense is that global warming of some sort seems pretty likely to be a problem in the next 100 years. If you mean a particularly severe form of global warming, it might help to use a more expressive term like “runaway climate change” or “severe climate change”, and possibly also a term for a more moderate form that appears in another box.
Sure, there are two types of global warming.
I think the risks of runaway global warming are underestimated, but there is very little scientific literature to support the idea.
If we take the accumulated toll from the smaller effects of long-term global warming of 2-6 C, it could easily add up to a very large number; but to be regarded as a global catastrophe, it should probably be more like a one-time event, or else many other things, like cancer, would also count as global catastrophes.
Modeling global catastrophic risks as single events makes sense for a lot of purposes. In terms of preventive actions to be taken, and how resources from, e.g., governments should be divvied up between risks, insuring against one-off catastrophes is important. I think it makes sense to exclude climate change itself from the model, and to include potentially catastrophic events which may result from runaway climate change, like worldwide famines.
Another way to model global catastrophic risks is to model their rising profile as a function of time, and to identify the critical milestones past which the scale or degree of the risk irreversibly escalates. This way, for the purposes of x-risk studies, we can have a model which incorporates both the risk climate change poses over time and the development of emerging technologies like AI and genetic engineering over the same period.
I haven’t read the whole paper yet, so forgive me if I miss some of the major points by just commenting on this post.
The image seems to imply that non-aligned AI would only extinguish human life on Earth. How do you figure that? It seems that an AI could extinguish all the rest of life on Earth too, even including itself in the process. [edit: this has since been corrected in the blog post]
For example, you could have an AI system that has the objective of performing some task X, before time Y, without leaving Earth, and then harvests all locally available resources in order to perform that task, before eventually running out of energy and switching off. This would seem to extinguish all life on Earth by any definition.
We could also discuss whether AI might extinguish all civilizations in the visible universe. This also seems possible. One reason for this is that humans might be the only civilization in the universe.
It is hard to encapsulate this all into a simple scale, but we wanted to recognize that false vacuum decay that would destroy the Universe at light speed would be worse than bad AI, at least if you think the future will be net positive. Bad AI could be constrained by a more powerful civilization.
No, in the paper we clearly said that non-aligned AI is a risk to the whole universe in the worst-case scenario.
Also, the image above indicates AI would likely destroy all life on earth, not only human life.
In the article, AI destroys all life on Earth, but in the previous version of the image in this blog post, the image was somewhat redesigned for better visibility and AI risk was moved to “kill all humans”. I have now corrected the image so it matches the one in the article, so the previous comment was valid.
Whether AI will be able to destroy other civilizations in the universe depends on whether those civilizations create their own AI before the intelligence explosion wave from us arrives at them.
So AI would kill only potential and young civilizations in the universe, not mature civilizations.
But this is not the case for a false vacuum decay wave, which would kill everything (according to our current understanding of AI and vacuum decay).