Thanks for writing this! I like the post a lot. This heuristic is one of the criteria we use to evaluate bio charities at Founders Pledge (see the “Prioritize Pathogen- and Threat-Agnostic Approaches” section starting on p. 87 of my Founders Pledge bio report).
One reason that I didn’t see listed among your premises is the general point about hedging against uncertainty: we’re just very uncertain about what a future pandemic might look like and where it will come from, and the threat landscape only becomes more complex with technological advances and intelligent adversaries. One person I talked to for that report said they’re especially worried about “pandemic Maginot lines”.
I also like the deterrence-by-denial argument that you make…
> [broad defenses] might also act as a deterrent because malevolent actors might think: “It doesn’t even make sense to try this bioterrorist attack because the broad & passive defense system is so good that it will stop it anyways”
… though I think for it to work you have to also add a premise about the relative risk of substitution, right? I.e., if you’re pushing bad actors away from biological weapons (BW), what are you pushing them towards, and how does the risk of that new weapon of choice compare to the risk of BW? I think the most likely substitutions (e.g. chem-for-bio substitution, as with Aum Shinrikyo) do seem like they would decrease overall risk.
Very useful comment, thanks!

> hedging against uncertainty: we’re just very uncertain about what a future pandemic might look like and where it will come from
I fully agree with this; I think this was an implicit premise of mine that I failed to point out explicitly.
> … though I think for it to work you have to also add a premise about the relative risk of substitution, right?
Great point that I actually hadn’t considered until now. I would need to think about this more before giving my opinion. It seems really context-dependent, though, and hard to determine with any confidence.
Also, the Maginot line analogy is cool; I hadn’t seen that before. (I guess I really should read more of your report 🙂)
I’m personally not that worried about substitution risks. Roughly: The deterrence aspect is strongest for low-resource threat actors and—from a tail-risk perspective—bio is probably the most dangerous thing they can utilize, with that pesky self-replication and whatnot.