Some thoughts on Toby Ord’s existential risk estimates

Toby Ord’s The Precipice is an ambitious and excellent book. Among many other things, Ord attempts to survey the entire landscape of existential risks[1] humanity faces. As part of this, he provides, in Table 6.1, his personal estimates of the chance that various things could lead to existential catastrophe in the next 100 years. He discusses the limitations and potential downsides of this (see also), and provides a bounty of caveats, including:

Don’t take these numbers to be completely objective. [...] And don’t take the estimates to be precise. Their purpose is to show the right order of magnitude, rather than a more precise probability.

Another issue he doesn’t mention explicitly is that people could anchor too strongly on his estimates.

But on balance, I think it’s great that he provides this table, as it could help people to:

  • more easily spot where they do vs don’t agree with Ord

  • get at least a very approximate sense of what ballpark the odds might be in

In this post, I will:

  1. Present a reproduction of Table 6.1

  2. Discuss whether Ord may understate the uncertainty of these estimates

  3. Discuss an ambiguity about what he’s actually estimating when he estimates the risk from “unaligned AI”

  4. Discuss three estimates I found surprisingly high (at least relative to the other estimates)

  5. Discuss some adjustments his estimates might suggest EAs/​longtermists should make to their career and donation decisions

Regarding points 2 and 4: In reality, merely knowing that these are Ord’s views about the levels of uncertainty and risk leads me to update my views quite significantly towards his, as he’s clearly very intelligent and has thought about this for much longer than I have. But I think it’s valuable for people to also share their “independent impressions”—what they’d believe without updating on other people’s views. And this may be especially valuable in relation to Ord’s risk estimates, given that, as far as I know, we have no other single source of estimates anywhere near this comprehensive.[2]

I’ll hardly discuss any of the evidence or rationale Ord gives for his estimates; for all that and much more, definitely read the book!

The table

Here’s a reproduction of Table 6.1:[3]

  • Asteroid or comet impact: ~1 in 1,000,000
  • Supervolcanic eruption: ~1 in 10,000
  • Stellar explosion: ~1 in 1,000,000,000
  • Total natural risk: ~1 in 10,000
  • Nuclear war: ~1 in 1,000
  • Climate change: ~1 in 1,000
  • Other environmental damage: ~1 in 1,000
  • ‘Naturally’ arising pandemics: ~1 in 10,000
  • Engineered pandemics: ~1 in 30
  • Unaligned artificial intelligence: ~1 in 10
  • Unforeseen anthropogenic risks: ~1 in 30
  • Other anthropogenic risks: ~1 in 50
  • Total anthropogenic risk: ~1 in 6
  • Total existential risk: ~1 in 6

(Each figure is Ord’s estimate of the chance that the given source causes existential catastrophe within the next 100 years.)

Understating uncertainty?

In the caption for the table, Ord writes:

There is significant uncertainty remaining in these estimates and they should be treated as representing the right order of magnitude—each could easily be a factor of 3 higher or lower.

Lighthearted initial reaction: Only a factor of 3?! That sounds remarkably un-uncertain to me, for this topic. Perhaps he means the estimates could easily be a factor of 3 higher or lower, but the estimates could also be ~10-50 times higher or lower if they really put their backs into it?

More seriously: This at least feels to me surprisingly “certain”/​“precise”, as does his above-quoted statement that the estimates’ “purpose is to show the right order of magnitude”. On the other hand, I’m used to reasoning as someone who hasn’t been thinking about this for a decade and hasn’t written a book about it—perhaps if I had done those things, then it’d make sense for me to occasionally at least know how many 0s should be on the ends of my numbers. But my current feeling is that, when it comes to existential risk estimates, uncertainties even about orders of magnitude may remain appropriate even after all that research and thought.
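
To make that difference concrete, here’s a minimal sketch in Python of the range implied by “a factor of 3” versus a full order of magnitude. The ~1 in 1,000 central estimate is arbitrary, chosen only because it matches the order of magnitude Ord gives for several risks:

```python
# Minimal illustration of how wide "a factor of 3" of uncertainty is compared
# with a full order of magnitude. The ~1 in 1,000 central value is arbitrary,
# used purely for illustration.

central = 1 / 1_000

for label, factor in [("factor of 3", 3), ("order of magnitude", 10)]:
    low, high = central / factor, central * factor
    print(f"{label}: ~1 in {round(1 / high):,} to ~1 in {round(1 / low):,}")

# Output (approximately):
#   factor of 3: ~1 in 333 to ~1 in 3,000
#   order of magnitude: ~1 in 100 to ~1 in 10,000
```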

Of course, the picture will differ for different risks. In particular:

  • for some risks (e.g., asteroid impacts), we have a lot of at least somewhat relevant actual evidence and fairly well-established models.

    • But even there, our evidence and models are still substantially imperfect for existential risk estimates.

  • And for some risks, the estimated risk is already high enough that it’d be impossible for the “real risk” to be two orders of magnitude higher (e.g., a ~1 in 10 estimate can’t be 100 times too low, since probabilities can’t exceed 1).

    • But then the real risk could still be orders of magnitude lower.

I’d be interested in other people’s thoughts on whether Ord indeed seems to be implying more precision than is warranted here.

What types of catastrophe are included in the “Unaligned AI” estimate?

Ord estimates a ~1 in 10 chance that “unaligned artificial intelligence” will cause existential catastrophe in the next 100 years. But I don’t believe he explicitly states precisely what he means by “Unaligned AI” or “alignment”, and Table 6.1 includes no other AI-related estimates. So I’m not sure which combination of the following issues he’s estimating the risk from:[4]

  1. AI systems that aren’t even trying to act in accordance with the instructions or values of their operator(s) (as per Christiano’s definition).

    • E.g., the sorts of scenarios Bostrom’s Superintelligence focuses on, where an AI actively strategises to seize power and optimise for its own reward function.

  2. AI systems which are trying to act in accordance with the instructions or values of their operator(s), but which make catastrophic mistakes in the process.

    • I think that, on some definitions, this would be an “AI safety” rather than “AI alignment” problem?

  3. AI systems that successfully act in accordance with the instructions or values of their operator(s), but not of all of humanity.

    • E.g., the AI systems are “aligned” with a malicious or power-hungry actor, causing catastrophe. I think that on some definitions this would be a “misuse” rather than “misalignment” issue.

  4. AI systems that successfully act in accordance with something like the values humanity believes we have, but not what we truly value, or would value after reflection, or should value (in some moral realist sense).

  5. “Non-agentic” AI systems which create “structural risks” as a byproduct of their intended function, such as by destabilising nuclear strategies.

In Ord’s section on “Unaligned artificial intelligence”, he focuses mostly on the sort of scenario Bostrom’s Superintelligence focused on (issue #1 in the above list). However, within that discussion, he also writes that we can’t be sure the “builders of the system are striving to align it with human values”, as they may instead be trying “to achieve other goals, such as winning wars or maximising profits.” And later in the chapter, he writes:

I’ve focused on the scenario of an AI system seizing control of the future, because I find it the most plausible existential risk from AI. But there are other threats too, with disagreement among experts about which one poses the greatest existential risk. For example, there is a risk of a slow slide into an AI-controlled future, where an ever-increasing share of power is handed over to AI systems and an increasing amount of our future is optimised towards inhuman values. And there are the risks arising from deliberate misuse of extremely powerful AI systems.

I’m not sure whether those quotes suggest Ord is including some or all of issues #2-5 in his definition or estimate of risks from “unaligned AI”, or if he’s just mentioning “other threats” as also worth noting but not part of what he means by “unaligned AI”.

I’m thus not sure whether Ord thinks the existential risk:

(A) from all of the above-mentioned issues is ~1 in 10.

(B) from some subset of those issues is ~1 in 10, while the risk from the other issues is negligible.

  • But how low would count as negligible, anyway? Recall that Table 6.1 includes a risk for which Ord gives odds of only 1 in a billion; I imagine he’d see the other AI issues as riskier than that.

(C) from some subset of those issues is ~1 in 10, while the risk from others of those issues is non-negligible but (for some other reason) not directly estimated.

Personally, and tentatively, it seems to me that at least the first four of the above issues may contribute substantially to existential risk, with no single one of the issues seeming more important than the other three combined. (I’m less sure about the importance of structural risks from AI.) Thus, if Ord meant B or especially C, I may have reason to be even more concerned than the ~1 in 10 estimate suggests.

I’d be interested to hear other people’s thoughts either on what Ord meant by that estimate, or on their own views about the relative importance of each of those issues.

“Other environmental damage”: Surprisingly risky?

Ord estimates a ~1 in 1000 chance that “other environmental damage” will cause existential catastrophe in the next 100 years. This category includes things like overpopulation, running out of critical resources, and biodiversity loss, but not climate change (which Ord estimates separately).

I was surprised that that estimate was:

  • that high

  • as high as his estimate of the existential risk from each of nuclear war and climate change

  • 10 times higher than his estimate for the risk of existential catastrophe from “‘naturally’ arising pandemics”

I very tentatively suspect that this estimate of the risk from other environmental damage is too high. I also suspect that, whatever the “real risk” from this source is, it’s lower than that from nuclear war or climate change. E.g., if ~1 in 1000 turns out to indeed be the “real risk” from other environmental damage, then I tentatively suspect the “real risk” from those other two sources is greater than ~1 in 1000. That said, I don’t really have an argument for those suspicions, and I’ve spent especially little time thinking about existential risk from other environmental damage.

I also intuitively feel like the risk from ‘naturally’ arising pandemics is larger than that from other environmental damage, or at least not 10 times lower. (And I don’t think that’s just due to COVID-19; I think I would’ve said the same thing months ago.)

On the other hand, Ord gives a strong argument that the per-century extinction risk from “natural” causes must be very low, based in part on our long history of surviving such risks. So I mostly dismiss my intuitive feeling here as quite unfounded; I suspect my intuitions can’t really distinguish “Natural pandemics are a big deal!” from “Natural pandemics could lead to extinction, unrecoverable collapse, or unrecoverable dystopia!”

On the third hand, Ord notes that that argument doesn’t apply neatly to ‘naturally’ arising pandemics, because changes in society and technology have substantially changed how easily pandemics can arise and spread (e.g., there’s now frequent air travel, though also far better medical science). In fact, Ord doesn’t even classify ‘naturally’ arising pandemics as a natural risk, instead placing them in the “Future risks” chapter. Additionally, as Ord also notes, that argument applies most neatly to risks of extinction, not to risks of “unrecoverable collapse” or “unrecoverable dystopia”.

So I do endorse some small portion of my feeling that the risk from ‘naturally’ arising pandemics is probably more than a 10th as big as the risk from other environmental damage.

“Unforeseen” and “other” anthropogenic risks: Surprisingly risky?

By “other anthropogenic risks”, Ord means risks from

  • dystopian scenarios

  • nanotechnology

  • “back contamination” from microbes from planets we explore

  • aliens

  • “our most radical scientific experiments”

Ord estimates the chances that “other anthropogenic risks” and “unforeseen anthropogenic risks” will cause existential catastrophe in the next 100 years at ~1 in 50 and ~1 in 30, respectively. Thus, he views these categories of risk as, respectively, ~20 and ~33 times as existentially risky (over this period) as each of nuclear war and climate change. He also views them as in the same ballpark as engineered pandemics. And as there are only 5 risks in the “other” category, he must see at least some of them (perhaps dystopian scenarios and nanotechnology?) as posing much higher existential risk than either nuclear war or climate change does.
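
As a quick sanity check on those ratios, here’s a sketch in Python that just restates the figures quoted above (no new information, only arithmetic):

```python
# Odds are "1 in N"; the ratio of two risks (1/N1) / (1/N2) is just N2 / N1.
other_anthropogenic = 50        # ~1 in 50
unforeseen_anthropogenic = 30   # ~1 in 30
nuclear_war_or_climate = 1000   # ~1 in 1,000 (Ord gives the same figure for climate change)

print(nuclear_war_or_climate / other_anthropogenic)       # 20.0
print(nuclear_war_or_climate / unforeseen_anthropogenic)  # ~33.3

# If the ~1 in 50 total for the five "other" risks were spread evenly, each
# would be ~1 in 250, i.e. 4x the nuclear war estimate; any uneven spread
# makes the largest of them riskier still.
print(nuclear_war_or_climate / (other_anthropogenic * 5))  # 4.0
```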

I was surprised by how high his estimates for risks from the “other” and “unforeseen” anthropogenic risks were, relative to his other estimates. But I hadn’t previously thought about these issues very much, so I wasn’t necessarily surprised by the estimates themselves, and I don’t feel myself inclined towards higher or lower estimates. I think my strongest opinion about these sources of risk is that dystopian scenarios probably deserve more attention than the longtermist community typically seems to give them, and on that point it appears Ord may agree.

Should this update our career and donation decisions?

One (obviously imperfect) proxy for the current priorities of longtermists is the set of problems 80,000 Hours recommends people work on. Their seven recommended problems are:

  • Positively shaping the development of artificial intelligence

  • Reducing global catastrophic biological risks

  • Nuclear security

  • Climate change (extreme risks)

  • Global priorities research

  • Building effective altruism

  • Improving institutional decision-making

The last three of those recommendations seem to me like they’d be among the best ways of addressing “other” and “unforeseen” anthropogenic risks. This is partly because those three activities seem like they’d broadly improve our ability to identify, handle, and/​or “rule out” a wide range of potential risks. (Another top contender for achieving such goals would seem to be “existential risk strategy”, which overlaps substantially with global priorities research and with building EA, but is more directly focused on this particular cause area.)

But as noted above, if Ord’s estimates are in the right ballpark, then:

  • “other” and “unforeseen” anthropogenic risks are each (as categories) substantially existentially riskier than each of nuclear war, climate change, or ‘naturally’ arising pandemics

  • at least some individual “other” risks must also be substantially riskier than each of those three things

  • “other environmental damage” is similarly existentially risky as nuclear war and climate change, and 10 times more so than ‘naturally’ arising pandemics

So, if Ord’s estimates are in the right ballpark, then:

  • Perhaps 80,000 Hours should write problem profiles on one or more of those specific “other” risks? And perhaps also about “other environmental damage”?

  • Perhaps 80,000 Hours should more heavily emphasise the three “broad” approaches they recommend (global priorities research, building EA, and improving institutional decision-making), especially relative to work on nuclear security and climate change?

  • Perhaps 80,000 Hours should write an additional “broad” problem profile on existential risk strategy specifically?

  • Perhaps individual EAs should shift their career and donation priorities somewhat towards:

    • those broad approaches?

    • specific “other anthropogenic risks” (e.g., dystopian scenarios)?

    • “other environmental damage”?

Of course, Ord’s estimates relate mainly to scale/impact, and not to tractability, neglectedness, or the number of job and donation opportunities currently available. So even if we decided to fully believe his estimates, their implications for career and donation decisions may not be immediately obvious. But it seems like the above questions would be worth considering, at least.

From memory, I don’t think Ord explicitly addresses these sorts of questions, perhaps because he was writing partly for a broad audience who would neither know nor care about the current priorities of EAs. Somewhat relevantly, though, his “recommendations for policy and research” (Appendix F) include items specifically related to nuclear war, climate change, environmental damage, and “broad” approaches (e.g., horizon-scanning for risks), but none specifically related to any of the “other anthropogenic risks”.


As stated above, I thought this book was excellent, and I’d highly recommend it. I’d also be excited to see more people commenting on Ord’s estimates (either here or in separate posts), and/or providing their own estimates. I do see potential downsides in making or publicising such estimates. But overall, it seems to me probably not ideal that longtermists have made so many strategic decisions without first collecting and critiquing a wide array of such estimates.

This is one of a series of posts I plan to write that summarise, comment on, or take inspiration from parts of The Precipice. You can find a list of all such posts here.

This post is related to my work with Convergence Analysis, but the views I expressed in it are my own. I’m grateful to David Kristoffersson for helpful comments on an earlier draft.


  1. ↩︎

    Ord defines an existential catastrophe as “the destruction of humanity’s longterm potential”. Such catastrophes could take the form of extinction, “unrecoverable collapse”, or “unrecoverable dystopia”. He defines an existential risk as “a risk that threatens the destruction of humanity’s longterm potential”; i.e., a risk of such a catastrophe occurring.

  2. ↩︎

    The closest other thing I’m aware of was a survey from 12 years ago, which lacks estimates for several of the risks Ord gives estimates for.

  3. ↩︎

    I hope including this is ok copyright-wise; all the cool codexes were doing it.

  4. ↩︎

    This is my own quick attempt to taxonomise different types of “AI catastrophe”. I hope to write more about varying conceptualisations of AI alignment in future. See also The Main Sources of AI Risk?