Some global catastrophic risk estimates
In October 2018, I developed a question series on Metaculus related to extinction events, spanning risks from nuclear war, bio-risk, climate change and geo-engineering, artificial intelligence, and nanotechnology failure modes. Since then, these questions have accrued over 3,000 predictions (ETA: as of today, the number is around 5,000).
A catastrophe is defined as a reduction in the human population of at least 10% in any period of 5 years or less. (Near) extinction is defined as an event that reduces the human population by at least 10% within 5 years and by at least 95% within 25 years.
Here’s a summary of the results as they stand today (September 24, 2023), ordered by risk of near extinction:
| Global catastrophic risk | Chance of catastrophe by 2100 | Chance of (near) extinction by 2100 |
| --- | --- | --- |
| Artificial Intelligence | 6.16% | 3.39% |
| Other risks | 1.52% | 0.13% |
| Biotechnology or bioengineered pathogens | 1.52% | 0.07% |
| Nuclear war | 2.86% | 0.06% |
| Nanotechnology | 0.02% | 0.01% |
| Climate change or geo-engineering | 0.00% | 0.00% |
| Natural pandemics | 0.62% | N/A |
These predictions are generated by aggregating forecasters’ individual predictions based on their track records. Specifically, the predictions are weighted by a function of the forecasters’ level of ‘skill’, where ‘skill’ is estimated with data on relative performance on a number (typically many hundreds) of resolved forecasts.
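As a rough illustration of this kind of skill-weighted aggregation, here is a minimal sketch in Python. It is not Metaculus's actual algorithm (the details of which do not appear to be public); the weighting scheme, the log-odds pooling, and all numbers are assumptions made for the example.

```python
import numpy as np

def skill_weighted_forecast(probabilities, skills):
    """Toy skill-weighted aggregate of forecasts for a binary event.

    probabilities: each forecaster's probability for the event.
    skills: each forecaster's estimated skill, e.g. derived from scores on
            previously resolved questions (higher = more weight).
    Illustrative sketch only -- not Metaculus's actual method.
    """
    p = np.asarray(probabilities, dtype=float)
    w = np.asarray(skills, dtype=float)
    w = w / w.sum()                      # normalise the skill weights
    log_odds = np.log(p / (1 - p))       # pool forecasts in log-odds space
    return 1.0 / (1.0 + np.exp(-np.dot(w, log_odds)))

# Example: three forecasters with different track records
print(skill_weighted_forecast([0.02, 0.05, 0.10], skills=[3.0, 1.5, 0.5]))
```

Pooling in log-odds space rather than averaging probabilities directly is just one common choice; any monotone pooling rule would serve to illustrate the weighting idea.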
If we assume that these events are independent, the predictions suggest that there’s a ~17% chance of catastrophe, and a ~1.9% chance of (near) extinction by the end of the century. Admittedly, independence is likely to be an inappropriate assumption, since, for example, some catastrophes could exacerbate other global catastrophic risks.[1]
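For concreteness, under the independence assumption the per-risk probabilities $p_i$ combine via the standard formula (nothing Metaculus-specific here):

$$P(\text{at least one such event by 2100}) = 1 - \prod_i (1 - p_i).$$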
Interestingly, the predictions indicate that although nuclear war and bioengineered pathogens are among the risks most likely to result in a major catastrophe, an AI failure mode is by far the biggest source of extinction-level risk: it is at least five times more likely to cause near extinction than all other risks combined.
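Spelling out that comparison with the September 2023 figures above (treating the small per-risk probabilities as roughly additive):

$$\frac{3.39\%}{0.13\% + 0.07\% + 0.06\% + 0.01\% + 0.00\%} \approx \frac{3.39\%}{0.27\%} \approx 12.6,$$

comfortably above the stated factor of five.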
Links to all the questions on which these predictions are based may be found here.
For reference, these were the estimates when I first posted this (19 Jun 2022):
| Global catastrophic risk | Chance of catastrophe by 2100 | Chance of (near) extinction by 2100 |
| --- | --- | --- |
| Artificial Intelligence | 3.06% | 1.56% |
| Other risks | 1.36% | 0.11% |
| Biotechnology or bioengineered pathogens | 2.21% | 0.07% |
| Nuclear war | 1.87% | 0.06% |
| Nanotechnology | 0.17% | 0.06% |
| Climate change or geo-engineering | 0.51% | 0.01% |
| Natural pandemics | 0.51% | N/A |
Forum readers who are not frequently on Metaculus may be interested to know that there are a number of biases and internal validity issues for long-term predictions on Metaculus, potentially more so than for short-term questions there. For example, arguably the most important long-term question on Metaculus has comments like:
I think a nonzero number of predictors take these comments quite seriously, or for other reasons are fairly flippant about finding accurate answers to these long-term questions. So forum readers should be extra careful before deferring blindly to Metaculus on such questions, and should rely more on other sources instead.
The strongest counterargument to my reasoning above might be something like: “Metaculus is unusually public and quantitative as a platform. To the extent that Metaculus has visible errors, we may expect that other epistemic sources have other, potentially larger, invisible errors.” (Analogy: the concept of “not even wrong” in science.) I take this reasoning quite seriously but do not consider it overwhelming.
The reasoning in the comment you quoted is actually not very persuasive, because it’s virtually certain that by 2100 the user will be dead, Metaculus won’t exist, or MIPs will have ceased to be valuable to them. Even the slightest concern for accuracy should trump the minuscule expected benefit of pursuing this alleged “optimal strategy”. (Though I guess some would derive great pleasure from being able to truly say “I predicted that humanity had a 99% chance of surviving the century 80 years ago and, lo and behold, here we are, alive and kicking!”).
Unfortunately, for questions with a shorter time horizon, that kind of argument may have some force. I feel ambivalent about discussing these issues, since I’m not sure how to balance the benefit of alerting others to the potential biases in Metaculus against the cost of exacerbating those biases, either by drawing attention to this strategy among predictors who hadn’t considered it, or by creating the impression that other predictors are using it and thereby eroding the social norm to predict honestly. I guess one can try to emphasize that, at least with questions whose answers have social value, adopting the MIP-maximizing strategy when it is in conflict with accuracy should be seen as a form of defection and those who do it should feel bad about it.
This is a good point that in retrospect seems obvious, and I’m a bit disappointed I hadn’t thought of it when I previously considered this issue or saw the comment Linch quoted. (That said, “virtually certain” maybe seems a bit strong to me.)
- 6% chance of Metaculus existing in 2100, from anthropic reasoning
- 1% chance of the user being alive in 2100, from eyeballing actuarial life tables

Given independence, that's ~0.05%, and I'd say that conditional on that combination of events obtaining, there's maybe a 15% chance the user cares (not caring includes not just a change in preferences but also a failure to fulfil the preconditions for caring, such as not remembering the prediction, being too senile to understand things, etc.). So something on the order of one in 10k.
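Spelling out that arithmetic as a single product, using the figures above:

$$0.06 \times 0.01 \times 0.15 \approx 9 \times 10^{-5},$$

i.e. roughly one in ten thousand.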
I think somewhat higher chance of users being alive than that, because of the big correlated stuff that EAs care about.
Thanks for sharing this summary! I think these questions and forecasts are a useful resource.
For anyone who wants to see more forecasts of existential risks (or similarly extreme outcomes), I made a database of all the ones I’m aware of. (People can also suggest additions to that, and it includes a link to these Metaculus forecasts.) And here’s a short talk in which I introduce the database and give an overview of the importance and challenges of estimating existential risk.
You may very well already be aware of this (I didn’t look at your linked post closely), but Elicit IDE has a “Search Metaforecast database” tool to search forecasts on several sites that may be helpful to your existential risk forecast database project. Here are the first 120 results for “existential risk.”
By request, I have updated the table with the latest predictions. Previous numbers can be found here.
Thanks so much for this, it’s a great resource!
Could you clarify a little the difference between the ‘community’ and the ‘metaculus’ forecasts? Is it correct that if I look at the live forecasts, I’ll see the community one (e.g. the community thinks 24% chance of a catastrophe atm)?
Is it also possible to calculate the chance of a catastrophe from an unknown risk from this? My understanding is that the total risk is forecasted at ~14% by the Metaculus group. If we add up the individual risks, we also get to ~14%. This suggests that the Metaculus group thinks there’s not much room for a catastrophe from an unknown source. Is that right?
The “Metaculus” forecast weights users’ forecasts by their track record and corrects for calibration; I don’t think the details of how are public. And yes, you can only see the community one on open questions.
I’d recommend against drawing the conclusion you did from the second paragraph (or at least, against putting too much weight on it). Community predictions on different questions about the same topic on Metaculus can be fairly inconsistent, due to different users predicting on each.
Ah thanks for clarifying (that’s a shame!).
Maybe we could add another question like “what’s the chance it’s caused by something that’s not one of the others listed?”
Or maybe there’s a better way at getting at the issue?
Hi Benjamin, these are great questions! I work with Metaculus and wanted to add a bit of color here:
To your question about how to see the Metaculus Prediction, that’s covered here: https://www.metaculus.com/help/faq/#tachyon-costs — basically, one has to reach a sufficient “level” and then pay out some tachyons (the coin of the realm) to unlock the Metaculus Prediction for that question. That said, in this case we’re happy to share the current MP with you. (I’ll message you here in a moment.)
And as to how the MP is calculated, the best resource there was written by one of the founders, and lives in this blog post: https://metaculus.medium.com/a-primer-on-the-metaculus-scoring-rule-eb9a974cd204
To your question about catastrophic risk from an unknown source: the table in the post doesn’t include that bit, as it only sums the percentages of the individual catastrophic risk questions, but you’re right that you can get something like it from the question you link to, which just refers to a 10% decrease by any means, full stop. The Metaculus Prediction there is lower than the Community Prediction, FYI, but is indeed above the ~14% you get from summing the other questions. That makes some sense, given that there are other possibilities, however remote, that are not explicitly named. But it’s also true that there are different predictors on each question, and that the linked-to forecast is not explicitly pitched as “summing the other catastrophes gives you 14%, so this question should produce a forecast of 14+X%, where X is the probability of unnamed catastrophes.”
I hope that was useful. Please do reach out if you’d like to continue the conversation.
That’s really helpful thank you!
Late to the thread, but one further thing I’d note is that it’s entirely possible for multiple different global catastrophe scenarios to occur by 2100. E.g., a global catastrophe in 2030 due to nuclear conflict and another in 2060 due to bioengineering. From a skim, I think the relevant Metaculus questions are about “by 2100” rather than “the first global catastrophe by 2100”, so they’re not mutually exclusive.
So if it were the case that the individual questions summed to 14% and the total question was also at 14% (which Christian’s answer suggests isn’t so, but I haven’t checked), that wouldn’t necessarily mean a ~0% chance of catastrophe from something else (though it would be at least weak evidence of that; e.g., if the total question had a forecast twice as high as the sum of the individual questions, that would be evidence in favour of the likelihood of some other catastrophe).
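To make that point concrete, here is a toy sketch. All numbers are made-up placeholders, and independence is assumed purely for illustration; this is not how Metaculus aggregates anything. It shows that even when an “any catastrophe” forecast equals the naive sum of the individual forecasts, there can still be nonzero implied probability left over for unlisted risks, because the individual events are not mutually exclusive.

```python
# Toy illustration (not Metaculus's methodology): non-mutually-exclusive
# per-risk probabilities combine to less than their naive sum, so a "total"
# forecast equal to that sum can still leave room for unlisted risks.
from math import prod

named = [0.06, 0.03, 0.02, 0.015, 0.006]      # hypothetical per-risk chances by 2100
p_any_named = 1 - prod(1 - p for p in named)  # chance at least one named risk occurs
p_total = sum(named)                          # suppose the "total" question sits at the naive sum

# Implied chance of a catastrophe from some unlisted source, if it were
# independent of the named risks:
p_other = 1 - (1 - p_total) / (1 - p_any_named)

print(f"naive sum:       {p_total:.3f}")
print(f"any named risk:  {p_any_named:.3f}")
print(f"implied 'other': {p_other:.3f}")
```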
I’ve updated the numbers based on today’s predictions. Key updates:
- AI-related risks have seen a significant increase, roughly doubling in terms of both catastrophic risk (from 3.06% in June 2022 to 6.16% in September 2023) and extinction risk (from 1.56% to 3.39%).
- Biotechnology risks have decreased in terms of catastrophe likelihood (from 2.21% to 1.52%), while staying constant for extinction risk (0.07% in both periods).
- Nuclear war has shown an uptick in catastrophic risk (from 1.87% to 2.86%) but remains consistent in extinction risk (0.06% in both periods).