Decomposing Biological Risks: Harm, Potential, and Strategies

Summary: If a very large global catastrophic pandemic requires a stealthy pathogen, as Manheim (2018) suggests is a key pathway, then preventing pathogens from remaining stealthy in at least one place could be enough to greatly reduce the existential risk. If so, simply pushing for the implementation of metagenomic sequencing, at least at the points of entry of a single country, could be enough to greatly reduce the risk, and would be a relatively low-hanging fruit.

Epistemic status: I’m not sure of all my assertions. This is intended to spark discussion and build a clearer idea of the type of risk associated with each scenario. Take everything in this article as meaning “I have the impression that”. (I thought and read quite extensively about the topic for 3 weeks as part of a research project on GCBR reduction, and have spent a further 2 months chatting with people about it and keeping it in mind.) At some points I give verbal estimates of risk; these are meant to rank the risks rather than to be meaningful probability estimates, which I didn’t feel I could give.

We’ll be considering in this post five different kinds of agents:

  • State Agents, for which there are three main classes of scenarios that could lead to a catastrophic event:

    • Agents release some pathogens to cause as much harm as possible:

      • A malevolent agent has enough power to lead a country to release bioweapons on purpose.

      • A state feels threatened and does everything to survive, or to cause as much harm as possible before disappearing.

    • There is an accidental leak from a bioweapon program (we include this below)

  • Non-State Agents with large capabilities/resources (Al-Qaeda, for instance).

    • Most such groups are not omnicidal / are not motivated to pursue the most worrying bioweapons technologies. (Omnicidal groups are thankfully not popular.)

    • They are much less likely to be cautious and are not subject to the same pressures against the development or use of bioweapons that states are.

  • Non-State Agents without extensive resources (most terrorist and omnicidal groups)

  • Accidental Leaks (from insufficiently cautious state bioweapons programs or from other research of concern)

  • Natural Pandemic Emergence

A Risk Factor Which is a Game-Changer

Before going into the details of each scenario, we need to talk about one factor that can change both the overall risk and its distribution.

According to Kevin Esvelt, a biologist who works on evolutionary and ecological engineering, it is really difficult to develop new pathogens, especially ones with the kinds of functions that could be lethal to humanity. Sonia Ben Ouagrham-Gormley, a bioweapons expert, strongly supports this viewpoint in her book, “Barriers to Bioweapons.” It is unclear whether any experts disagree, at least regarding the near-term future. If this view is correct, most of the risk over the coming years will come from the biggest laboratories, whether civilian research labs or state-controlled facilities.

As a result, two of the biggest risk factors for each location are the extent to which:

  • There is gain-of-function or other pathogen enhancement or dual-use research of concern involving pathogens which could plausibly lead to or enable the development of extremely dangerous pathogens.

  • Such research is made public, with released DNA sequences, open access, or other forms of information hazard.

Depending on the magnitude of these risk factors, the distribution of the risk changes:

  • Greater publicity about this kind of research greatly increases the risk coming from non-state agents; the risk tends to increase more than proportionally, because publicity enables many small independent actors, such as non-state agents with few resources, to engage in research of concern. This would be especially worrying for existential risks.

  • More gain-of-function research means that, all else being equal, there is a higher likelihood of a lab leak. This would be worrying mainly for catastrophic risks. It would also make future attempts to intentionally develop bioweapons more likely to succeed.

Finally, it is important to note that there is a qualitative difference between the risk of accidents caused by research of concern and the risk linked to the publication of information of concern. While gain-of-function research creates a transient risk (i.e., when the research stops, the risk ceases), publication permanently increases the potential for small agents to cause harm on a large scale.
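As a rough illustration of this difference, here is a minimal sketch comparing the cumulative probability of at least one bad event from a transient hazard (research that stops after ten years) with that from a permanent hazard (information that stays public forever). All rates and durations are made-up assumptions of mine, not estimates:

```python
# Toy comparison of a transient hazard (the risk stops when the research stops)
# with a permanent hazard (published information keeps enabling new actors).
# All rates and durations are illustrative assumptions, not estimates.
import numpy as np

years = np.arange(1, 51)        # a 50-year horizon
research_duration = 10          # assume the research of concern stops after 10 years
transient_rate = 0.002          # assumed annual accident probability while research runs
permanent_rate = 0.001          # assumed annual misuse probability once info is public

# Probability of at least one event by year t, treating years as independent.
p_transient = 1 - (1 - transient_rate) ** np.minimum(years, research_duration)
p_permanent = 1 - (1 - permanent_rate) ** years

print(f"Transient hazard, cumulative risk by year 50: {p_transient[-1]:.3f}")  # ~0.020
print(f"Permanent hazard, cumulative risk by year 50: {p_permanent[-1]:.3f}")  # ~0.049
```

Even with a lower annual rate, the permanent hazard eventually overtakes the transient one, which is why information hazards arguably deserve special attention.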

Below I estimate the risk under a set of assumptions: we are not able to totally stop research of concern, but we are somewhat able to prevent the worst blueprints from being made publicly available.

One potentially useful risk modeling technique is to approximate or elicit the distribution of harm for each type of agent. In such a model, it seems likely that big agents have much fatter tails (a greater probability of extreme outcomes) than small agents, but a much lower probability of causing any incident at all. In expected-value terms, however, the comparison is less clear, and because the tail of the distribution for large agents includes existentially risky scenarios, these risks could easily dominate the calculation from a longtermist viewpoint.
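As a very rough sketch of what such a model could look like, one could simulate annual harm from “small” and “large” actors and compare event probabilities, expected harm, and tails. The distributions and parameters below are purely illustrative assumptions of mine, not calibrated estimates:

```python
# Monte Carlo sketch: frequent-but-thin-tailed small actors vs.
# rare-but-fat-tailed large actors. All numbers are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000  # simulated years

# Small actors: incidents are relatively common, severities are modest.
small_p_event = 0.05
small_severity = rng.lognormal(mean=6.0, sigma=1.0, size=n)   # typically ~hundreds of deaths

# Large actors: incidents are rare, but the severity distribution has a fat tail.
large_p_event = 0.002
large_severity = rng.lognormal(mean=9.0, sigma=3.0, size=n)   # occasionally catastrophic

small_harm = (rng.random(n) < small_p_event) * small_severity
large_harm = (rng.random(n) < large_p_event) * large_severity

print(f"P(incident in a year)  small: {small_p_event}   large: {large_p_event}")
print(f"Mean annual harm       small: {small_harm.mean():.0f}   large: {large_harm.mean():.0f}")
print(f"99.99th percentile     small: {np.quantile(small_harm, 0.9999):.0f}   "
      f"large: {np.quantile(large_harm, 0.9999):.0f}")
```

With these made-up numbers, large actors cause incidents far less often but dominate both the expected harm and, especially, the extreme tail, which is the longtermist worry.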

Existential Risks

Definition: Humanity goes extinct.

Consequences: Humanity’s potential is lost forever. As argued by Toby Ord in The Precipice, this outcome is tremendously worse than a catastrophic but non-extinction-level event, as long as we think that humanity’s future is large and valuable, and do not rapidly discount the value of that future.

Type of scenarios:

I see three main biorisk scenarios that could lead to humanity’s extinction:

  1. A pathogen is released and kills almost everyone at roughly the same time, without sufficient warning to respond. It spreads quickly enough to take the human population below the minimum viable population.

  2. A pathogen kills most people and a proactive organization kills the survivors.

  3. A pathogen kills most people, and other indirect causes make humanity go extinct (Note: this is unlikely according to Luisa Rodriguez).

For each of these scenarios, a stealthy pathogen (i.e., one with a long incubation time) would be of great help and represents an important share of the probability. Killing most people without a stealthy pathogen seems almost impossible. This is clearest for the first scenario, but even for scenarios 2 and 3 stealth seems to be an almost necessary condition. Indeed, World War II as a whole killed no more than 3% of the world population, so a single organization killing more than 1% of the population seems rather unlikely. For scenario 2 to kill everyone, the pathogen would probably need to kill more than 99% of people for the organization to be able to kill the last survivors. For scenario 3 to kill everyone, the pathogen would have to kill 99.9% of the population, and even then, Luisa Rodriguez argues that the main way such an event could lead to extinction is if only one large group of survivors remains. In any of these scenarios, the required number of deaths makes stealth really important, because so many deaths from a pathogen would be unlikely if we are aware of it and have any time to prepare or mitigate the spread.
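A quick back-of-the-envelope calculation (assuming a world population of roughly 8 billion, which is my own illustrative figure) shows why even an extraordinarily lethal pathogen leaves far more survivors than the minimum viable population, so scenarios 2 and 3 need something else to finish the job:

```python
# Survivor counts for different kill fractions, assuming a world population of
# ~8 billion (my illustrative assumption, not a figure from the post).
world_population = 8_000_000_000
minimum_viable_population = 1_000  # upper end of the range used later in this post

for kill_fraction in (0.99, 0.999, 0.9999, 0.99999):
    survivors = world_population * (1 - kill_fraction)
    print(f"{kill_fraction:.3%} killed -> {survivors:,.0f} survivors "
          f"({survivors / minimum_viable_population:,.0f}x the minimum viable population)")
```

Even a pathogen killing 99.99% of people would leave around 800,000 survivors, far above the minimum viable population, which is why direct extinction from the pathogen alone (scenario 1) is so demanding.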

Agents:

  • State Agents: Low to moderate risk. (A malevolent agent in control of a state could do this, and this is where most of the risk comes from. State actors themselves, even those acting to ensure their own survival, seem unlikely to be motivated to develop and widely release such a universally deadly pathogen.)

  • Non-State Actors with extensive resources (Al-Qaeda, for instance, or in the past, Aum Shinrikyo): Moderate risk. In the median scenario for which I estimate the risk, crafting a pathogen such as the ones mentioned above and releasing it efficiently requires lots of resources and a fairly long development time. Agents that might be both very malevolent and powerful are rare, so a lot of the risk is concentrated in very few potential actors. Moreover, scenario 2 requires that the omnicidal non-state agent can expect to stay alive long enough to kill the last survivors.

  • Non-State Actors with few resources: Low risk, mainly through scenarios 1 or 3. Scenario 2 (a proactive organization finishing off the survivors) seems highly unlikely. The magnitude of the risk here depends heavily on what information is publicly available, because such an organization could probably only recreate what already exists. But since there are many more agents like this than larger ones, if dangerous pathogens become publicly available this risk could easily become the main one.

  • Accidental Leaks: Low risk, essentially only through scenarios 1 or 3. The risk is comparable to that from weak non-state agents: there is no intention to kill humanity (which makes me update downward on the risk), but there are many labs, there have been many leaks in the past, and in our median scenario there is still some gain-of-function research that makes scenarios 1 and 3 possible.

  • Natural Pandemics: Likely a very low risk. One big uncertainty, as argued in Manheim’s 2018 paper, is that:

    • In the past, the world was not interconnected, so a single pathogen couldn’t have killed humanity as a whole.

    • Thus there could have been pathogens that wiped out entire civilizations without leaving us any record of it, and the risk could still be as high as 1 in 5,000 because of this.

Partial Conclusion:

If this analysis captures most of the X-risk, then eliminating the stealthy scenarios might greatly reduce the risk of extinction coming from GCBRs. This implies that broad or universal pathogen surveillance could be a critical risk mitigation measure.

Catastrophic Risks

Definition: A catastrophe that kills more than 10% of the population[1] but that doesn’t drive the world population close to or below the minimum viable population (minimum viable population ≈ 100–1,000 people).

Type of scenarios:

  • A pathogen that can spread very rapidly, with a high mortality rate, seems to be a necessary condition for such a catastrophe. Being stealthy would enable it to spread before any response could be mounted, which would also greatly increase the likelihood that a pathogen leads to a catastrophe.

Distribution of the risk:

  • State Agents: Moderate risk. The risk coming from a state in survival mode seems higher for catastrophic risks than for existential ones, and there is still risk from malevolent agents who become highly influential in a country.

  • Non-State Actors with extensive resources (Al-Qaeda, for instance): high risk

  • Non-State Actors with few resources: moderate to high risk

  • Accidental Leaks: high risk

  • Natural Pandemics: quite low risk

Moderate Global Risks

Definition: At least 0.01% of the population is killed, in at least 10 different countries, but less than 10% of the population is killed. (This is intended to be similar in scale to Covid.)
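For a sense of scale, here is a rough conversion of the thresholds into absolute numbers, reading the 0.01% threshold as a share of the world population (my interpretation) and assuming roughly 8 billion people (my assumption):

```python
# Rough conversion of the post's thresholds into absolute death counts,
# assuming a world population of ~8 billion (my assumption).
world_population = 8_000_000_000

moderate_floor = 0.0001 * world_population    # 0.01% -> ~800 thousand deaths
catastrophic_floor = 0.10 * world_population  # 10%   -> ~800 million deaths

print(f"'Moderate global risk' band starts around {moderate_floor:,.0f} deaths")
print(f"'Catastrophic risk' band starts around {catastrophic_floor:,.0f} deaths")
# Covid-19's confirmed death toll (several million) falls inside the moderate band.
```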

Type of scenarios:

  • A natural event

  • A state agent bioweapon program accident

  • An intentionally targeted attack that spreads only to a limited extent / can be contained

Distribution of the risk:

  • State Agents: moderate to high risk (because the kind of biological agent we can expect from such an organization, if it kills 0.01% of the population, is quite likely to go on to kill more than 10%)

  • Non-State Actors with extensive resources (Al-Qaeda, for instance): high to great risk (for the same reasons as above)

  • Non-State Actors with few resources: great risk

  • Accidental Leaks: great risk

  • Natural Pandemics: high risk

One Country to Safeguard Humanity

Safeguarding Humanity’s Potential Might Require Only One Country

The biggest difference between reducing catastrophic risks and reducing anthropogenic existential biorisks is that:

  • For existential risk reduction, you only need one country to survive. Thus, under the assumption that it is unlikely that everyone in a country dies if that country is aware of the danger, investing a lot of resources into risk reduction in one or a few countries is possibly much more efficient than trying to improve the standards of the global community. A few thoughts follow from this:

    • We could thus maximize a quantity that accounts for:

      • Likelihood that a country accepts the required measures

      • Potential for X-risk mitigation. This factor would include parameters such as: given information about a very dangerous pathogen, how likely is the country to take very strong measures?

      • Self-sufficiency ability.

  • Islands have some comparative advantages, but few of them have very strong self-sufficiency. The United States seems to be a good candidate in the short term, because of its emphasis on national security (including biosecurity) and because of its huge resources that could ensure the preservation of human potential. (For example, it is largely self-sufficient in terms of food and energy, and could likely become self-sufficient in other ways if needed.) One big downside is that it is among the most likely targets for asymmetrical bioterrorism or biological warfare. It is also highly connected to most countries in the world. But if we think that it’s impossible for the entire population of a country to get wiped out while that country is aware of the threat, then the US seems to be a good candidate.

  • Given that metagenomic sequencing seems to be enough to prevent any stealthy scenario, thanks to its ability to detect any exponentially growing DNA sequence (see the rough sketch after this list), pushing for it in the US could be enough to greatly reduce X-risks. According to Kevin Esvelt, it would cost about $1 billion per year with current technologies to implement this as a standard precaution for screening most people entering the country.

  • Another advantage is that having even a few very well-protected countries protects every other country as well, because it greatly reduces the potential of bioweapons as a way to kill humanity, and thus disincentivizes large organizations (from which a significant share of the risk currently comes) from using them to try to destroy humanity.
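To make the “exponentially growing sequence” idea above a bit more concrete, here is a minimal, purely illustrative sketch (my own toy example, not Kevin Esvelt’s actual proposal or any real surveillance pipeline): fit a log-linear trend to the fraction of metagenomic reads matching a sequence across successive sampling dates, and flag the sequence if the fitted growth rate is high. The function name, data, and threshold are all assumptions for illustration.

```python
# Minimal sketch of flagging an exponentially growing sequence in metagenomic
# samples via a log-linear fit to its read share over time. The function name,
# data, and threshold are illustrative assumptions only.
import numpy as np

def flag_exponential_growth(read_fractions, min_growth_per_sample=1.5):
    """Return True if a sequence's read share grows roughly exponentially.

    read_fractions: fraction of reads matching the sequence at successive,
    evenly spaced sampling dates (e.g. weekly airport or wastewater samples).
    """
    fractions = np.asarray(read_fractions, dtype=float)
    if len(fractions) < 3 or np.any(fractions <= 0):
        return False  # need several positive observations to fit a trend
    times = np.arange(len(fractions))
    slope, _ = np.polyfit(times, np.log(fractions), 1)   # log-linear fit
    return np.exp(slope) >= min_growth_per_sample         # growth factor per sample

# A hypothetical sequence roughly doubling its read share each sample is flagged;
# a flat, noisy background sequence is not.
print(flag_exponential_growth([1e-6, 2e-6, 4.1e-6, 7.9e-6]))    # True
print(flag_exponential_growth([1e-6, 1.2e-6, 0.9e-6, 1.1e-6]))  # False
```

In practice, of course, the hard parts are collecting enough samples, sequencing them cheaply, and controlling false positives; the point here is only that the detection criterion itself is simple.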

Conclusion:

I hope this article provides a useful breakdown of the risk and some food for thought for discussing the probabilities we assign to various scenarios. If my analysis is right, it means we’re lucky: we don’t need that much coordination to remove the bulk of the existential risk coming from pathogens. If not, I’m happy to discuss which scenarios you find most plausible and whether they have normative consequences for public policy. I’m currently running a project with other people which suggests that the amount of coordination needed to mitigate GCBRs (understood as the number of countries we need to get on board to solve most of the problem) could decrease with the magnitude of the catastrophe. More on this in a later post.
Thanks for reading, and please, share your thoughts!

  1. ^

    This threshold is arbitrary, but aims to designate a memorable event that would affect humanity for at least decades and probably centuries.