SecureBio—Notes from SoGive

Call notes with SecureBio

Below are notes from a call between Sanjay & Spencer from SoGive and Tiffany Tzeng (Operations Manager) & Ben Mueller (COO) from SecureBio on Tuesday, January 30, 2024. We reached out to SecureBio after seeing that they are one of several biosecurity funding opportunities actively recommended by Founders Pledge. We have chosen to publish the call notes we believe are likely to be of most interest.

[Who is SoGive? SoGive works with donors, especially major donors, to help them have more impact with their giving. If you are interested in having a conversation about your giving, please reach out to Spencer (spencer@sogive.org).]

Current work

Ben Mueller joined SecureBio two years ago, when the organization consisted of little more than the Delay/Detect/Defend framework for countering engineered pathogens. He previously ran strategy and operations at a fintech company. Tiffany Tzeng joined a year and a half ago and was previously a Product Manager at Wayfair. She has an academic background in moral philosophy, and her engagement with EA led her to seek longtermist work. The founder, Kevin Esvelt, who was not on the call, invented the CRISPR gene drive and began to develop concerns about the lack of preparedness for exponential biorisks.

SecureBio suggested that the security mindset is not sufficiently present in most of the bioengineering industry and biotechnology research. Conversations we have had with other biosecurity experts corroborate this claim. Kevin Esvelt teaches research labs how to draw the distinction between research that should and shouldn't be transparently pursued.

The Delay/Detect/Defend framework is a holistic take on how to address future pandemics.

  1. Delay: SecureDNA/gene synthesis screening is underused. SecureDNA has released a software tool that gene synthesis companies can deploy to catch more threats. This buys more time before a potentially threatening pathogen can be released. SecureBio expects that, in the future, there will be ways to evade screening tools. However, they claim that screening tools still make it much more difficult to order or synthesize dangerous pathogens.

  2. Detect: SecureBio uses metagenomics to catch new pathogens early. They are working on solving core research barriers in areas such as exponential growth detection, which is one way to catch a pathogen before you know what it is (see the illustrative sketch after this list). This provides an early warning for potentially threatening pathogens that have been released. The branch of SecureBio working on detection is the Nucleic Acid Observatory (NAO).

  3. Defend: SecureBio is researching Far-UVC. If a threatening pathogen is eventually synthesized despite these delays and also evades detection, Far-UVC is intended to protect against transmission.
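To make the exponential growth detection idea under "Detect" more concrete, here is a minimal sketch of our own, not SecureBio's or the NAO's actual method: given daily counts of sequencing reads matching a particular sequence, fit a log-linear model of its relative abundance over time and flag it if the estimated daily fold change exceeds a threshold. The function name, the pseudocount, and the threshold are assumptions made purely for illustration.

```python
# Illustrative sketch only: a toy exponential-growth flagger for a metagenomic
# time series. This is NOT SecureBio's or the NAO's actual method; the
# pseudocount, threshold, and log-linear fit are assumptions for illustration.
import numpy as np

def flag_exponential_growth(counts, totals, min_fold_per_day=1.2):
    """Flag a sequence whose relative abundance appears to grow exponentially.

    counts: reads matching the sequence on each day
    totals: total reads sequenced on each day
    Returns (flagged, estimated_daily_fold_change).
    """
    counts = np.asarray(counts, dtype=float)
    totals = np.asarray(totals, dtype=float)
    days = np.arange(len(counts))

    # Relative abundance with a pseudocount so log() is defined on zero-count days.
    abundance = (counts + 0.5) / totals

    # Exponential growth in abundance is linear growth in log-abundance,
    # so fit log(abundance) ~ intercept + slope * day by least squares.
    slope, _intercept = np.polyfit(days, np.log(abundance), 1)
    daily_fold_change = np.exp(slope)

    # Flag if the fitted fold change per day exceeds the chosen threshold.
    return daily_fold_change >= min_fold_per_day, daily_fold_change


if __name__ == "__main__":
    # Toy example: matching reads roughly double every two days while
    # total sequencing depth stays flat.
    counts = [2, 3, 4, 6, 9, 13, 18, 27]
    totals = [1_000_000] * len(counts)
    flagged, fold = flag_exponential_growth(counts, totals)
    print(f"flagged={flagged}, estimated daily fold change={fold:.2f}")
```

A real system would also need to handle sequencing noise, varying sequencing depth, and multiple testing across millions of candidate sequences; the sketch only conveys the core idea of flagging signals that grow exponentially relative to background.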

The interplay between AI and bio threats

SecureBio views AI as a potential new source of threat. They claim we are not yet at a stage where AI is likely to be a source of new pandemic risk, i.e., current models are not yet able to engineer drastically new threats.

SecureBio bootstrapped a team to build evaluation tools that assess the state of biorisk from AI. They brought together industry stakeholders (frontier AI companies and other organizations such as METR) to build benchmarks and analyses. The evaluations differ slightly for each frontier AI company and are built to detect sources of biorisk in that company's models.

SecureBio is developing a few proprietary mitigation strategies to prevent AI from gaining dangerous capabilities in pathogen engineering. SecureBio has one software engineer in-house, and CAIS is taking the lead on DevOps, deployment, and computational infrastructure. SecureBio's contribution to these mitigation strategies is mostly its understanding of biorisk and the Delay/Detect/Defend model.

This project is where SecureBio sees the highest marginal value from additional funding: they would be able to expand this work substantially. They can fund it through core funds until March or April 2024 (so as of posting these notes, they likely need funding for these evals to continue). The AI work costs SecureBio $700,000 per six months, which covers a team lead, a policy scientist, external contractors, a research assistant, and some others. They say they would probably try the next iteration without a policy scientist. Even if the work is throttled from April onwards, they would be able to spin this project back up later, because it is staffed by contractors.

Counterfactual impact

If SecureBio were not around, who would work on this? Some similar projects exist at Gryphon Scientific and RAND Corporation, but SecureBio told us that there is a coverage gap. That is, AI companies might try to hire in-house safety teams instead of relying on independent safety teams, but this may be done inadequately, because in-house safety teams introduce more internal process friction. Compared to Gryphon Scientific and RAND, SecureBio says they have a unique focus on exponential biorisks.

SecureBio’s risk analysis

How could AI turn out really badly? SecureBio claims there are credible ways AI could cause large-scale physical harm through the three backbones of society:

  1. Financial systems: Financial systems are, fortunately, disaggregated; databases cannot speak to each other, which makes it harder for an AI to take down the whole system. However, there is precedent for financial crises having significant ramifications (e.g., the Global Financial Crisis), and currency manipulation could conceivably have material implications for a single country.

  2. Weapons of mass destruction: Land-based missiles are controlled by old technology (floppy disks), making them safer from electronic intrusion. However, our increasingly digitized military command-and-control infrastructure makes this an obvious high-leverage attack surface for adversaries. An agentic AI could create attacks, stage fake attacks to trigger real counter-attacks, or interrupt our ability to detect strikes, inducing humans to strike first.

  3. Bio: There is historical precedent for pandemics with extremely high fatality rates, including pathogens against which humans have no immunity achieving fatality rates in excess of 90%. Any agentic AI would look at bioweapons as a way to eliminate human obstacles to its goals.

We asked SecureBio this question because we like to hear multiple perspectives; however, we would consider SecureBio to be expert on the “bio” component of this list, and not necessarily on the other elements.