Exaggerating the risks (Part 13: Ord on Biorisk)


This is a crosspost for Exaggerating the risks (Part 13: Ord on Biorisk), as published by David Thorstad on 29 December 2023.

This massive democratization of technology in biological sciences … is at some level fantastic. People are very excited about it. But this has this dark side, which is that the pool of people that could include someone who has … omnicidal tendencies grows many, many times larger, thousands or millions of times larger as this technology is democratized, and you have more chance that you get one of these people with this very rare set of motivations where they’re so misanthropic as to try to cause … worldwide catastrophe.

Toby Ord, 80,000 Hours Interview


1. Introduction

This is Part 13 of my series Exaggerating the risks. In this series, I look at some places where leading estimates of existential risk look to have been exaggerated.

Part 1 introduced the series. Parts 2-5 (sub-series: “Climate risk”) looked at climate risk. Parts 6-8 (sub-series: “AI risk”) looked at the Carlsmith report on power-seeking AI.

Parts 9, 10 and 11 began a new sub-series on biorisk. In Part 9, we saw that many leading effective altruists give estimates between 1.0-3.3% for the risk of existential catastrophe from biological causes by 2100. I think these estimates are a bit too high.

Because I have had a hard time getting effective altruists to tell me directly what the threat is supposed to be, my approach was to first survey the reasons why many biosecurity experts, public health experts, and policymakers are skeptical of high levels of near-term existential biorisk. Parts 9, 10 and 11 gave a dozen preliminary reasons for doubt, surveyed at the end of Part 11.

The second half of my approach is to show that initial arguments by effective altruists do not overcome the case for skepticism. Part 12 examined a series of risk estimates by Piers Millett and Andrew Snyder-Beattie. We saw, first, that many of these estimates are orders of magnitude lower than those returned by leading effective altruists and second, that Millett and Snyder-Beattie provide little in the way of credible support for even these estimates.

Today’s post looks at Toby Ord’s arguments in The Precipice for high levels of existential risk. Ord estimates the risk of irreversible existential catastrophe by 2100 from naturally occurring pandemics at 1 in 10,000, and the risk from engineered pandemics at a whopping 1 in 30. That is a very high number. In this post, I argue that Ord does not provide sufficient support for either of his estimates.

2. Natural pandemics

Ord begins with a discussion of natural pandemics. I don’t want to spend too much time on this issue, since Ord takes the risk of natural pandemics to be much lower than that of engineered pandemics. At the same time, it is worth asking how Ord arrives at a risk of 1 in 10,000.

Effective altruists rightly stress that humans have trouble understanding how large certain future-related quantities can be. For example, there might be 10^20, 10^50 or even 10^100 future humans. However, effective altruists do not equally stress how small future-related probabilities can be. Risk probabilities can be on the order of 10^-2 or even 10^-5, but they can also be a great deal lower than that: for example, 10^-10, 10^-20, or 10^-50 [for example, a terrorist attack causing human extinction is astronomically unlikely on priors].

Most events pose existential risks of this magnitude or lower, so if Ord wants us to accept that natural pandemics have a 1 in 10,000 chance of leading to irreversible existential catastrophe by 2100, Ord owes us a solid argument for this conclusion. It is certainly far from obvious: for example, devastating as the COVID-19 pandemic was, I don’t think anyone believes that 10,000 random re-rolls of the COVID-19 pandemic would lead to at least one existential catastrophe. The COVID-19 pandemic just was not the sort of thing to pose a meaningful threat of existential catastrophe, so if natural pandemics are meant to go beyond the threat posed by the recent COVID-19 pandemic, Ord really should tell us how they do so.

Ord begins by surveying four historical pandemics: the Plague of Justinian, Black Death, Columbian Exchange, and Spanish Flu. Ord notes that while each of these events led to substantial loss of life, most were met with surprising resilience.

Even events like these fall short of being a threat to humanity’s longterm potential. In the great bubonic plagues we saw civilization in the affected areas falter, but recover. The regional 25 to 50 percent death rate was not enough to precipitate a continent-wide collapse of civilization. It changed the relative fortunes of empires, and may have altered the course of history substantially, but if anything, it gives us reason to believe that human civilization is likely to make it through future events with similar death rates, even if they were global in scale.

I drew a similar lesson from the study of historical pandemics in Part 9 of this series.

Next, Ord notes that the fossil record suggests the historical risk of existential catastrophe from naturally occurring pandemics was low:

The strongest case against existential risk from natural pandemics is the fossil record argument from Chapter 3. Extinction risk from natural causes above 0.1 percent per century is incompatible with the evidence of how long humanity and similar species have lasted.

This accords with what we found in Part 9 of this series: the fossil record reveals only a single confirmed mammalian extinction due to disease, and that was the extinction of a species of rat in a very small and remote location (Christmas Island).

Of course, Ord notes, levels of risk from natural pandemics have changed both for the better and for the worse in recent history. On the one hand, we are more vulnerable because there are more of us, and we live in a denser and more interconnected society. On the other hand, we have excellent medicine, technology, and public health to protect us. For example, we saw in Part 10 of this series that simple non-pharmaceutical interventions in Wuhan and Hubei may have reduced cases by a factor of 67 by the end of February 2020, and that for the first time a global pandemic was ended in real-time by the development of an effective vaccine.

So far, we have seen the following: Historical pandemics suggest, if anything, surprising resilience of human civilization to highly destructive events. The fossil record suggests that disease rarely leads to mammalian extinction, and while human society has since changed in some ways that make us more vulnerable than our ancestors were, we have also changed in some ways that make us less vulnerable than our ancestors were. So far, we have been given no meaningful argument for a 1 in 10,000 chance of irreversible existential catastrophe from natural pandemics by 2100. Does Ord have anything in the way of a positive argument to offer?

Here is the entire remainder of Ord’s analysis of natural pandemics:

It is hard to know whether these combined effects have increased or decreased the existential risk from pandemics. This uncertainty is ultimately bad news: we were previously sitting on a powerful argument that the risk was tiny; now we are not. But note that we are not merely interested in the direction of the change, but also in the size of the change. If we take the fossil record as evidence that the risk was less than one in 2,000 per century, then to reach 1 percent per century the pandemic risk would need to be at least 20 times larger. This seems unlikely. In my view, the fossil record still provides a strong case against there being a high extinction risk from “natural” pandemics. So most of the remaining existential risk would come from the threat of permanent collapse: a pandemic severe enough to collapse civilization globally, combined with civilization turning out to be hard to re-establish or bad luck in our attempts to do so.

What is the argument here? Certainly Ord makes a welcome concession in this passage: since natural pandemics are unlikely to cause human extinction in this century, most of the risk should come from threats of civilizational collapse. But that isn’t an argument. It’s a way of setting the target that Ord needs to argue for. Why think that civilization stands a 1 in 10,000 risk of collapse, let alone permanent collapse without recovery, by 2100 due to natural pandemics? We really haven’t been given any substantive argument at all for this conclusion.
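The fossil-record arithmetic behind Ord’s reply can be made concrete. The following is my own illustrative sketch, not code from Ord or this post: at a constant per-century extinction risk p, the chance of surviving n centuries is (1 - p)^n, and Homo sapiens has lasted roughly 2,000 centuries (about 200,000 years).

```python
# Illustrative sketch of the fossil-record argument's arithmetic
# (my own illustration; the specific risk levels come from the quoted
# passage, the survival formula is standard probability).

def survival_probability(p_per_century: float, centuries: int) -> float:
    """Probability of surviving `centuries` centuries at constant
    per-century extinction risk `p_per_century`."""
    return (1 - p_per_century) ** centuries

CENTURIES = 2000  # ~200,000 years of Homo sapiens

# At 0.1% per century (the threshold Ord cites as incompatible with the
# evidence), surviving this long would have been fairly unlikely:
print(survival_probability(0.001, CENTURIES))   # ~0.135

# At 1 in 2,000 (0.05%) per century, survival is more plausible:
print(survival_probability(0.0005, CENTURIES))  # ~0.368
```

The point of the sketch is just that sustained per-century risks above a few parts in ten thousand quickly become hard to square with 2,000 centuries of survival, which is why the fossil record constrains natural extinction risk so tightly.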

3. Laboratory research

Another potential biorisk is the threat posed by unintentional release of pathogens from research laboratories. Ord notes that biological research is progressing quickly:

Progress is continuing at a rapid pace. The last ten years have seen major qualitative breakthroughs, such as the use of CRISPR to efficiently insert new genetic sequences into a genome and the use of gene drives to efficiently replace populations of natural organisms in the wild with genetically modified versions. Measures of this progress suggest it is accelerating, with the cost to sequence a genome falling by a factor of 10,000 since 2007 and with publications and venture capital investment growing quickly. This progress in biotechnology seems unlikely to fizzle out soon: there are no insurmountable challenges looming; no fundamental laws blocking further developments.

That’s fair enough. But how do we get from there to a 1 in 30 chance of existential catastrophe?

Ord begins by discussing the advent of gain-of-function research, focusing on the Dutch researcher Ron Fouchier, who passed strains of H5N1 through ferrets until the virus gained the ability to be transmitted between mammals. That is, by now, old news. Indeed, we saw in Part 12 of this series that in 2014 the US Government commissioned a thousand-page report on the risks and benefits of gain-of-function research. That report made no mention of existential risks of any kind: the largest casualty figure modeled in this report is 80 million.

Does Ord provide an argument to suspect that gain-of-function research could lead to existential catastrophe? Ord goes on to discuss the risks of laboratory escapes. These are, again, well-known and discussed in the mainstream literature, including the government report featured in Part 12 of this series. Ord concludes from this discussion that:

In my view, this track record of escapes shows that even BSL-4 is insufficient for working on pathogens that pose a risk of global pandemics on the scale of the 1918 flu or worse—especially if that research involves gain-of-function.

But this is simply not what is at issue: no one thinks that pandemics like the 1918 flu or the COVID-19 pandemic pose a 1 in 30 chance of irreversible existential catastrophe by 2100. Perhaps the argument is meant to be contained in the final phrase (“1918 flu or worse”), but if so, this isn’t an argument, merely a statement of Ord’s view.

Aside from a list of notable laboratory escapes, this is the end of Ord’s discussion of risks posed by unintentional release of pathogens from research laboratories. Is this discussion meant to ground a 1 in 30 risk of existential catastrophe by 2100? I hope not, because there is nothing in the way of new evidence in this section, and very little in the way of argument.

4. Bioweapons

The final category of biorisk discussed by Ord is the risk posed by biological weapons. Ord begins by reviewing historical bioweapons programs, including the Soviet bioweapons program as well as biowarfare by the British army in Canada in the 18th century CE, ancient biowarfare in Asia Minor in the 13th century BCE, and potential intentional spread of the Black Death by invading Mongol armies.

I also discussed the Soviet bioweapons program in Part 9 of this series, since it is the most advanced (alleged) bioweapons program of which I am aware. We saw there that a leading bioweapons expert drew the following conclusion from study of the Soviet bioweapons program:

In the 20 years of the Soviet programme, with all the caveats that we don’t fully know what the programme was, but from the best reading of what we know from the civil side of that programme, they really didn’t get that far in creating agents that actually meet all of those criteria [necessary for usefulness in biological warfare]. They got somewhere, but they didn’t get to the stage where they had a weapon that changed their overall battlefield capabilities; that would change the outcome of a war, or even a battle, over the existing weapon systems available to them.

Ord’s discussion of the Soviet bioweapons program tends rather towards omission of the difficulties posed by the program, instead playing up its dangers:

The largest program was the Soviets’. At its height it had more than a dozen clandestine labs employing 9,000 scientists to weaponize diseases ranging from plague to smallpox, anthrax and tularemia. Scientists attempted to increase the diseases’ infectivity, lethality and resistance to vaccination and treatment. They created systems for spreading the pathogens to their opponents and built up vast stockpiles, reportedly including more than 20 tons of smallpox and of plague. The program was prone to accidents, with lethal outbreaks of both smallpox and anthrax … While there is no evidence of deliberate attempts to create a pathogen to threaten the whole of humanity, the logic of deterrence or mutually assured destruction could push superpowers or rogue states in that direction.

I’m a bit disappointed by the selective use of details here. We are told all of the most frightening facts about the Soviet program: how many scientists they employed, how large their stockpiles were, and how they were prone to accidents. But we aren’t told how far they fell from their goal of creating a successful bioweapon.

Is there anything in this passage that grounds a case for genuine existential risk? Ord notes that “[w]hile there is no evidence of deliberate attempts to create a pathogen to threaten the whole of humanity, the logic of deterrence or mutually assured destruction could push superpowers or rogue states in that direction.” What should we make of this argument? The right response, I think, is to ask Ord for more details.

We’ve seen throughout Parts 9, 10 and 11 of this series that it is extremely difficult to engineer a pathogen which could lead to existential catastrophe. Ord seems to be claiming not only that such a pathogen could be developed in this century, but also that states may soon develop such a pathogen as a form of mutually assured destruction. Both claims need substantial argument, the latter not least because humanity already has access to a much more targeted deterrent in the form of nuclear weapons. That isn’t to say that Ord’s claim here is false, but it is to say that a single sentence won’t do. If there is a serious case to be made that states can, and soon may develop pathogens which could lead to existential catastrophe in order to deter others, that case needs to be made with the seriousness and care that it deserves.

Ord notes that historical data does not reflect substantial casualties from bioweapons. However, Ord suggests, we may have too little data to generalize from, and in any case the data suggests a “power law” distribution of fatalities that may favor high estimates of existential risk. That’s fair enough, though we saw in Part 12 that power law estimates of existential biorisk face substantial difficulties, and also that the most friendly published power law estimate puts the risks orders of magnitude lower than Ord does.
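To see why a power-law fit can favor high risk estimates, note that a power-law (Pareto) tail P(X > x) = (x_min / x)^alpha decays polynomially rather than exponentially, so it assigns non-negligible probability to events vastly larger than anything observed. A minimal sketch, with parameters I have made up purely for illustration (they are not the estimates discussed in Part 12):

```python
# Illustrative sketch of power-law tail extrapolation (my own example
# with hypothetical parameters, not figures from Ord or Part 12).

def tail_probability(x: float, x_min: float, alpha: float) -> float:
    """P(fatalities > x) under a Pareto tail with scale x_min
    and tail exponent alpha (smaller alpha = heavier tail)."""
    return (x_min / x) ** alpha

# Hypothetical: events with at least 1,000 deaths, alpha = 0.5.
# Probability that one such event exceeds 8 billion deaths:
p = tail_probability(8e9, 1e3, 0.5)
print(p)  # ~0.00035 -- small, but not astronomically small
```

This is exactly why the choice of tail exponent matters so much, and why, as noted above, power-law estimates of existential biorisk face substantial difficulties: small changes in alpha, fit to sparse historical data, swing the extrapolated tail probability by orders of magnitude.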

From here, Ord transitions into a discussion of the dangers posed by the democratization of biotechnology and the spread of “do-it-yourself” science. Ord writes:

Such democratization promises to fuel a boom of entrepreneurial biotechnology. But since biotechnology can be misused to lethal effect, democratization also means proliferation. As the pool of people with access to a technique grows, so does the chance it contains someone with malign intent.

We discussed the risk of “do-it-yourself” science in Part 10 of this series. There, we saw that a paper by David Sarapong and colleagues laments “Sensational and alarmist headlines about DiY science” which “argue that the practice could serve as a context for inducing rogue science which could potentially lead to a ‘zombie apocalypse’.” These experts find little empirical support for any such claims.

That skepticism is echoed by most leading experts and policymakers. For example, we also saw in Part 10 that a study of risks from synthetic biology by Catherine Jefferson and colleagues decries the “myths” that “synthetic biology could be used to design radically new pathogens” and “terrorists want to pursue biological weapons for high consequence, mass casualty attacks”, concluding:

Any bioterrorism attack will most likely be one using a pathogen strain with less than optimal characteristics disseminated through crude delivery methods under imperfect conditions, and the potential casualties of such an attack are likely to be much lower than the mass casualty scenarios frequently portrayed. This is not to say that speculative thinking should be discounted … however, problems arise when these speculative scenarios for the future are distorted and portrayed as scientific reality.

The experts are skeptical. Does Ord give us any reason to doubt this expert consensus? The only remaining part of Ord’s analysis is the following:

People with the motivation to wreak global destruction are mercifully rare. But they exist. Perhaps the best example is the Aum Shinrikyo cult in Japan, active between 1984 and 1995, which sought to bring about the destruction of humanity. They attracted several thousand members, including people with advanced skills in chemistry and biology. And they demonstrated that it was not mere misanthropic ideation. They launched multiple lethal attacks using VX gas and sarin gas, killing 22 people and injuring thousands. They attempted to weaponize anthrax, but did not succeed. What happens when the circle of people able to create a global pandemic becomes wide enough to include members of such a group? Or members of a terrorist organization or rogue state that could try to build an omnicidal weapon for the purposes of extortion or deterrence?

The first half of this paragraph suggests that although few sophisticated groups would want to cause an existential catastrophe, some such as Aum Shinrikyo have had that motivation. The best thing to say about this claim is that it isn’t what is needed: we were looking for an argument that advances in biotechnology will enable groups to bring about existential catastrophe, not that groups will be motivated to do so. However, we also saw in Part 2 of my series on epistemics that this claim is false: Aum Shinrikyo did not seek to “bring about the destruction of humanity,” and the falsity of this claim is clear enough from the research record that it is hard to understand why Ord would be repeating it.

The second half of this paragraph concludes with two leading questions: “What happens when the circle of people able to create a global pandemic becomes wide enough to include members of such a group? Or members of a terrorist organization or rogue state that could try to build an omnicidal weapon for the purposes of extortion or deterrence?” But questions are not arguments, and they are especially not arguments for what Ord needs to show: that the democratization of biotechnology will soon provide would-be omnicidal actors with the means to bring about existential catastrophe.

5. Governance

The chapter concludes with a discussion of some ways that biohazards might be governed, and some failures of current approaches. I don’t want to dwell on these challenges, in large part because I agree with most of them, though I would refer readers to Part 2 of my series on epistemics for specific disagreements about the tractability of progress in this area.

Ord begins by noting that since its founding, the Biological Weapons Convention (BWC) has been plagued with problems. The BWC has a minuscule staff and no effective means of monitoring or enforcing compliance. This limits the scope of international governance of biological weapons.

Ord notes that synthetic biology companies often make voluntary efforts to manage the risks posed by synthetic biology, such as screening orders for dangerous compounds. This is not surprising: theory suggests that large companies will often self-regulate as a strategy for avoiding government regulation. As Ord notes, there is some room for improvement: only about 80% of orders are screened, and future advances may make screening more difficult. That is fair enough.

Ord observes that the scientific community has also tried to self-regulate, though with mixed success.

All of this is quite reasonable, but it does not do much to bolster the fundamental case for a 1 in 30 risk of existential catastrophe from engineered pandemics by 2100. It might make it easier for those already convinced of the risk to see how catastrophes could fail to be prevented, but what we really need from Ord is more argument bearing on the nature and prevalence of the underlying risks.

6. Taking stock

Toby Ord claims that there is a 1 in 30 chance of irreversible existential catastrophe by 2100 from engineered pandemics. That is an astoundingly high number.

We saw in Parts 9-11 of this series that most experts are deeply skeptical of Ord’s claim, and that there are at least a dozen reasons to be wary. This means that we should demand especially detailed and strong arguments from Ord to overcome the case for skepticism.

Today’s post reviewed every argument, or in many cases every hint of an argument made by Ord in support of his risk estimates. We found that Ord draws largely on a range of familiar facts about biological risk which are common ground between Ord and the skeptical expert consensus. We saw that Ord gives few detailed arguments in favor of his risk estimates, and that those arguments given fall a good deal short of Ord’s argumentative burden.

We also saw that Ord estimates a 1 in 10,000 chance of irreversible existential catastrophe by 2100 from natural pandemics. Again, we saw that very little support is provided for this estimate.

This isn’t a situation that should sit comfortably with effective altruists. Extraordinary claims require extraordinary evidence, yet here, as so often before, extraordinary claims about future risks are supported by rather less than extraordinary evidence. Much more is needed to ground high risk estimates, so we will have to look elsewhere for such arguments.