Currently: Head of AI Research at SecureBio
Previously: Biosecurity roadmapping (focus on built-environment disinfection via far-UVC) at Convergent Research. Finishing a virology PhD on clinical sequencing, diversity, and evolution of DNA viruses in the transplant setting.
Also ran EA Osnabrück and Hannover from 2015–2022. Cultivating a wide range of EA-related interests, including welfare biology, metaethics, and progress studies, among many others.
All posts and comments are in purely personal capacity.
This must have been a ton of work; thanks for this thorough writeup of current discussions in biosecurity. Thank you for the high praise of SecureBio, too!
I still want to flag three possible misconceptions:
Early in the piece, on the question of motivation for pursuing bioweapons, you ask:
This is an objection we hear relatively frequently, and it somewhat misses the core reason why the current longtermist-flavored, scope-sensitive biosecurity you discuss in your article emerged: global catastrophic risks. If you truly want to cause catastrophic mass harm (from tens of millions of deaths up to extinction), you basically only have nukes and pandemics available. Nukes are practically impossible to obtain, and while pandemic-capable viruses are definitely not trivially easy to make and work with, the information and methods around them are close to open-source. Thus, if a malicious actor wants to cause global-scale harm, there aren't any simpler acts of violence that roughly accomplish the same thing.
On the offense–defense balance of AIxBio progress:
I think this claim understates how much disagreement there is around the offense–defense balance of biological tools. The landscape of biological tool capabilities seems to be pretty jagged, and it's not obvious to me that this results in balanced progress, especially given (as you correctly point out) the inherently offense-favoring nature of pandemics. It would be a pretty big coincidence if a new technology happened to be exactly as useful for offense as for defense. Indeed, the consensus among my colleagues and me at SecureBio is that AI tools will accelerate bioweapon design and deployment well before they enable countermeasure design and deployment sufficient to render us acceptably safe from such risks.
On weaponization:
It is a bit unclear to what extent the concept of "weaponization" applies to the pandemic threats that many people in scope-sensitive biosecurity care about. Historically, weaponization was indeed a pretty big problem for certain BWs, which required specific conditions for storage in munitions, preparation for aerosolization, etc. But as we've seen with COVID, "weaponization" for many pandemic-capable pathogens might be essentially complete at the purification stage, requiring only getting the virus onto somebody's epithelium via a spray bottle. (And regarding the "economics of bioweapons" point, I'd re-emphasize my first point.)
That being said, I definitely agree with most of what you wrote about biodefense and the reasons for optimism. We have a ton of promising interventions that can form a pretty airtight Swiss-cheese stack if we manage to implement them: biohardened environments, PPE stockpiles, and widespread early detection alone would make BWs very unattractive. Again, great to see such a detailed piece on biosecurity.