Countermeasures & substitution effects in biosecurity

An adversary who knows that his opponents’ troops have been inoculated against anthrax can switch his battle plans to smallpox or plague—or to an agent for which no vaccine exists. - Ken Alibek, Biohazard

A challenge for reducing bio risk is that many of the risks come from adversaries. Adversaries can react to our interventions, so developing countermeasures may be less effective than one might naively expect due to ‘substitution effects’.[1] There are several distinct substitution effects:

  • ‘Switching’ - we find a countermeasure for X, adversary then switches from X to developing Y

  • ‘Escalating’ - we find a countermeasure for X, adversary modifies X to X’ to overcome countermeasure[2]

  • ‘Attention Hazard + Offense Bias’ - we investigate a countermeasure for X, but fail. The adversary was not previously developing X but, seeing our interest in X, starts to develop it.

    • Can be combined with escalation: even if we successfully find a countermeasure for X, the adversary is now on the general X pathway and starts developing X’

    • Can also simply be a timeframe effect, if the adversary can produce X before we successfully develop the countermeasure to X (although here it matters whether the adversary is more like a terrorist, who would deploy X as soon as it was created, or a state program that would keep X around for a while, imposing some ongoing accident or warfare risk until the countermeasure for X was found).

    • Can be a problem if we think the countermeasure will be imperfect enough that the attention hazard outweighs the benefit of developing it.

  • ‘Exposing Conserved Vulnerabilities’ - imagine that there are 10 possible attacks. For 9 of them, we could quickly develop a countermeasure in an emergency, but for one of them, finding a countermeasure is impossible (and we don’t know which is which in advance). We research countermeasures for all 10 attacks and solve 9 of them, but in doing so we also reveal which attack is impossible to counter. The adversary then picks that one, leaving us worse off than if we had simply remained ignorant and waited for an emergency (e.g. if, before we did the research, the adversary would have had only a 10% chance of picking that attack). By picking up the low-hanging fruit, we’ve ‘funneled’ our adversary towards the weak points (a toy calculation below illustrates this).
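
To make the arithmetic in that last bullet concrete, here is a toy expected-harm comparison. All of the numbers are illustrative assumptions I’ve made for the sketch (the harm values, an adversary who picks uniformly at random before our research, zero residual harm for the nine solved attacks), not estimates of anything real.

```python
# Toy model of the 'Exposing Conserved Vulnerabilities' scenario.
# All numbers below are illustrative assumptions, not estimates.

N_ATTACKS = 10                # possible attacks
HARM_COUNTERABLE = 1.0        # harm from an attack we could counter quickly in an emergency
HARM_UNCOUNTERABLE = 10.0     # harm from the one attack with no possible countermeasure

# Before researching countermeasures: the adversary picks an attack uniformly at
# random, so it lands on the uncounterable one only 1 time in 10.
p_uncounterable = 1 / N_ATTACKS
expected_harm_before = (
    (1 - p_uncounterable) * HARM_COUNTERABLE
    + p_uncounterable * HARM_UNCOUNTERABLE
)

# After researching all 10: the 9 solvable attacks are neutralised (assume zero
# residual harm), but the research also reveals which attack cannot be countered,
# and the adversary now picks that one with certainty.
expected_harm_after = HARM_UNCOUNTERABLE

print(f"Expected harm before research: {expected_harm_before:.1f}")  # 1.9
print(f"Expected harm after research:  {expected_harm_after:.1f}")   # 10.0
```

Under these made-up numbers, researching countermeasures for everything roughly quintuples expected harm, even though it neutralises nine tenths of the attack space.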

Substitution effects will have varying consequences for global catastrophic biological risks (GCBRs). In a worst-case scenario, finding countermeasures to more mundane agents will cause adversaries to move towards GCBR territory (either by more heavily engineering those mundane agents, or by switching to entirely new kinds of attack). However, this is counterbalanced by the fact that bioweapons (‘BW’) in general might be less attractive when countermeasures exist for many of them.

  • ‘Reduced BW appeal’ - An adversary has a program that is developing X and Y. We find a cure for X, which reduces the appeal of the entire program, causing the adversary to give up on both X and Y.

Better technology for attribution (e.g. tracing the origin of an attack or accident) is one concrete example that produces ‘reduced BW appeal.’ Better attribution is unlikely to dissuade development of bioweapons oriented towards mutually assured destruction (and we might expect most GCBRs to come from such weapons). But by reducing the strategic/tactical appeal of bioweapons for assassination, sabotage, or ambiguous attacks, it lowers the overall appeal of a BW program, which could have spillover effects that reduce the probability of more GCBR-style weapons.

One key question around substitution effects is the flexibility of an adversary. I get a vague impression from reading about the Soviet program that many scientists were extreme specialists, focusing on only one type of microbe. If this is the case, I would expect escalation risk to be greater than risks of switching or attention hazards (e.g. all the smallpox experts try to find ways around the vaccine, rather than switching to Ebola[3]). This is especially true if internal politics and budget battles are somewhat irrational and favor established incumbents (e.g. so that the smallpox scientist gets a big budget even if their project to bypass a countermeasure is unjustifiable).

Some implications:

  • Be wary of narrow countermeasures

  • Be hesitant to start an offense-defense race unless we think we can win

  • Look for broad-spectrum countermeasures or responses—which are more likely to eliminate big chunks of the risk landscape and to reduce the overall appeal of bioweapons

Thank you to Chris Bakerlee, Anjali Gopal, Gregory Lewis, Jassi Pannu, Jonas Sandbrink, Carl Shulman, James Wagstaff, and Claire Zabel for helpful comments.


  1. ↩︎

    Analogous to the ‘fallacy of the last move’ - H/T Greg Lewis

  2. ↩︎

    Forcing an adversary to escalate from X to X’ may still reduce catastrophic risk by imposing additional design constraints on the attack

  3. ↩︎

    Although notably the Soviet program attempted to create a smallpox/Ebola chimera virus in order to bypass smallpox countermeasures