New ideas for mitigating biotechnology misuse

Jonas Sandbrink

Mitigating the misuse and proliferation of dangerous biotechnology capabilities is a crucial part of our biosecurity strategy, especially until systems like robust pathogen detection and super PPE are in place. However, preventing the misuse of biotechnology can be difficult to work on because of the risk of drawing attention to the very capabilities we fear. Andrew Snyder-Beattie and Ethan Alley mention strengthening the Biological Weapons Convention as one such project in their Concrete Biosecurity Projects post.

Here, I present some new, more or less concrete project ideas in this space that seem promising and which have not been explored substantially. They should be seen as complements to core risk-mitigation efforts that already receive attention, including strengthening the Biological Weapons Convention, DNA synthesis screening, and genetic engineering attribution.

If you are keen to work on any of the projects below, please register your interest through this Google Form.

1. Record-keeping for strong attribution

In most places, if you buy a gun, its serial number is registered. This enables law enforcement to link guns used in crimes to their owners and thus deters misuse. We should create a similar system for attributing biological agents, which likewise have the potential to kill people, to their creators. This could be achieved by keeping records of the genetic sequences of the organisms that scientists work on.

Record-keeping could take place at the DNA synthesis or sequencing stage. For instance, a record-keeping module could be introduced into all DNA synthesis machines together with DNA synthesis screening. Recorded DNA sequences would be encrypted (potentially hashed) to protect intellectual property (IP) and stored in a way that remains accessible to later investigation. If an unusual outbreak occurred, the sequence of the causative agent could be similarly encrypted and screened against the encrypted DNA records from facilities. A match would flag facilities that have worked on the agent in question, which could automatically trigger an inspection. I call this system Retrospective Encrypted Corroboration Of Recorded DNA Sequences (RECORDS), but you might come up with something better.
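To make the matching step concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the facility names, the sequences, and the use of plain SHA-256 hashing (a deployed RECORDS system would need fuzzy matching for near-identical sequences and cryptography designed to resist adversarial probing, neither of which this toy shows).

```python
import hashlib

def record_hash(sequence: str) -> str:
    """One-way hash of a DNA sequence, so stored records reveal no IP."""
    return hashlib.sha256(sequence.upper().encode()).hexdigest()

# Hypothetical facility records: facility ID -> set of sequence hashes,
# accumulated as each facility synthesizes or sequences DNA.
facility_records = {
    "facility_A": {record_hash("ATGCGTACGTTAGC")},
    "facility_B": {record_hash("GGGCCCAAATTTGG")},
}

def flag_facilities(outbreak_sequence: str, records: dict) -> list:
    """Hash the outbreak agent and flag facilities holding a matching record."""
    target = record_hash(outbreak_sequence)
    return [fid for fid, hashes in records.items() if target in hashes]

print(flag_facilities("ATGCGTACGTTAGC", facility_records))  # -> ['facility_A']
```

Note that exact hashing only ever matches identical sequences; this is what makes the scheme IP-preserving, but it is also why a real system would need some form of privacy-preserving near-match comparison.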

A record-based system for strong attribution of biological agents may be a powerful mechanism to deter biological weapons development and use. Thus, such a system could feature as part of a future BWC compliance regime. Routine visits as part of such a compliance regime could check for active record-keeping. For instance, microbial samples collected in the laboratory could be sequenced, encrypted, and checked against facility records for gaps. Lastly, functional strong attribution could be used to identify accidental laboratory releases—it could have provided important positive or negative evidence in the COVID-19 origins debate.

A simpler initial alternative to a RECORDS system might be to secure a commitment from DNA synthesis companies to cross-check existing records in the case of an unusual outbreak. For instance, members of the International Gene Synthesis Consortium keep order and customer records for eight years, which could be tapped into.

There are many technical, economic, and political challenges to making record-keeping workable. Individuals familiar with DNA synthesis and sequencing, cryptography, blockchain development, or the social sciences might be able to contribute here by doing initial scoping work, expanding on this idea, and figuring out how to put it into practice.

2. Responsible access to genetic sequences

In most places, to buy a gun, you need a license. Such a license certifies that you have a legitimate need for a gun and a clean criminal record. In contrast, anyone can currently access any genetic blueprint, including those of deadly pathogens. I think we need to build responsible access systems, so that certain genetic blueprints, whether already known or discovered in the future, can only be accessed with a legitimate reason.

In a recent related paper, James Smith and I argue that patient data may serve as a parallel. For privacy reasons, access to patient data is tightly controlled: researchers need to prove their credentials and apply with a concrete project. It seems reasonable to argue that a blueprint for a pandemic weapon should be subject to at least the same level of scrutiny. And similar to how patient data is anonymized unless full records are absolutely required, a responsible access system could ensure that only those fractions of a genetic sequence that are actually needed are shared with developers of vaccines and other countermeasures.

Responsible access systems are both a technical and a public engagement challenge. We need to find ways to create such systems while generating as little friction as possible for legitimate research. This might involve direct work with genetic sequence repositories like GenBank, the European Nucleotide Archive, the DNA Data Bank of Japan, and GISAID. Furthermore, responsible access needs to consider equitable access and prevent discrimination against researchers from developing countries.

Skeptics might argue that dangerous organisms are already out of the box. However, for now we are still protected by our limited knowledge of blueprints for pandemic-capable pathogens against which we lack countermeasures. It seems likely that we will discover pathogens worse than those already known. To prevent their proliferation, we need to build responsible access systems now.

3. Consensus-finding on risks and benefits of research

One reason the debate around so-called "gain-of-function research" (the enhancement of potential pandemic pathogens) has stalled is that researchers disagree over the benefits and risks of this research. However, this does not mean we should give up on estimating the expected value of a given experiment; rather, we should pool the opinions of different experts into a consolidated estimate.
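As an illustration of what "pooling opinions into a consolidated estimate" could mean in the simplest case, here is a sketch using a linear opinion pool. All the numbers are invented for illustration; real elicitation would involve far more structure than four parameters per expert.

```python
# Hypothetical expert estimates for one experiment: each expert gives
# P(major benefit), value of that benefit, P(accident), and accident cost
# (in arbitrary common units).
experts = [
    {"p_benefit": 0.30, "benefit": 100, "p_accident": 1e-4, "cost": 1e6},
    {"p_benefit": 0.10, "benefit": 100, "p_accident": 1e-3, "cost": 1e6},
    {"p_benefit": 0.20, "benefit": 100, "p_accident": 5e-4, "cost": 1e6},
]

def pooled_expected_value(experts: list) -> float:
    """Linear opinion pool: average each parameter, then compute EV."""
    n = len(experts)
    p_b = sum(e["p_benefit"] for e in experts) / n
    b = sum(e["benefit"] for e in experts) / n
    p_a = sum(e["p_accident"] for e in experts) / n
    c = sum(e["cost"] for e in experts) / n
    return p_b * b - p_a * c

print(pooled_expected_value(experts))  # negative: pooled risk outweighs benefit
```

Even this toy version makes the key point visible: an experiment can look worthwhile to the most optimistic expert yet have negative expected value once divergent accident estimates are pooled.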

Consensus-finding platforms like pol.is could help resolve this gridlock of differing beliefs. Pol.is allows individuals to vote on others' comments and uses a machine learning algorithm to identify points of consensus (summary here). Such a platform could become a cornerstone of discussions around research risks and governance. Discussions could produce guidance on whether and in what form individual projects should take place, including research like the enhancement of potential pandemic pathogens. Furthermore, such consensus-finding platforms could be used to create robust rankings of different categories of research by their benefits and risks. These rankings could feed into the comparative risk-benefit assessment of research projects and thus help ground funding decisions in a project's expected value.
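The core mechanic can be sketched very simply. Pol.is itself clusters voters into opinion groups before looking for cross-group agreement; the toy below skips the clustering and just flags comments with broad agreement across all voters, using an invented vote matrix and an arbitrary threshold.

```python
# Toy vote matrix: rows are voters, columns are comments;
# 1 = agree, -1 = disagree, 0 = pass. All values are made up.
votes = [
    [1,  1, -1, 1],
    [1, -1, -1, 1],
    [1,  1,  1, 0],
    [1, -1, -1, 1],
]

def consensus_comments(votes: list, threshold: float = 0.75) -> list:
    """Return indices of comments that at least `threshold` of voters agree with."""
    n_voters = len(votes)
    flagged = []
    for c in range(len(votes[0])):
        agree = sum(1 for row in votes if row[c] == 1)
        if agree / n_voters >= threshold:
            flagged.append(c)
    return flagged

print(consensus_comments(votes))  # -> [0, 3]
```

A research-governance version of this would replace the toy threshold with the group-aware consensus statistics pol.is computes, but the input (structured votes on many short claims about risks and benefits) would look much the same.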

To drive the application of consensus-finding platforms to decisions over research strategies, individuals might trial pol.is and other platforms for evaluating the benefits and risks of different projects. A starting point might be effective altruism projects and funding decisions. In the long term, building a hub or pipeline for such inquiries could be very promising.

4. Information loops to steer funding to less risky projects

Imagine two sets of identical houses that differ by only one factor: in one set, the electricity meter is in the basement; in the other, it is in the front room. Where do you think electricity consumption would be lower? Arguably, in the houses with the meter in the front room, where the inhabitants are constantly confronted with their consumption. Donella Meadows presents this story in her famous Leverage Points essay to demonstrate the power of information loops. Can we create and leverage such information loops to reduce biotechnology risks?

In 1986, the US Toxics Release Inventory required the public disclosure of hazardous air pollutants released from factories. Within four years, Meadows claims, emissions had dropped by 40%. A similar requirement for the public disclosure of laboratory accidents would likely incentivize better laboratory practices, and requiring the public disclosure of funding for risky research like the enhancement of potential pandemic pathogens might encourage more thorough review and oversight. Since a large fraction of relevant information on grants is available online, a nonprofit could scrape the internet for grants from different funding bodies, assign risk scores based on high-level categories, and collate these on a website to highlight differences in risk-taking behavior.
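The scoring step of such a pipeline could start as simply as keyword matching against high-level risk categories. Everything below is hypothetical: the categories, the scores, and the grant abstracts; real scores would come from expert consensus, not a hand-written mapping.

```python
# Illustrative high-level categories and made-up risk scores (0-10 scale).
RISK_CATEGORIES = {
    "enhanced potential pandemic pathogen": 10,
    "gain-of-function": 8,
    "aerosol transmission": 6,
    "vaccine development": 1,
}

def risk_score(grant_abstract: str) -> int:
    """Score a grant by the highest-risk category its abstract mentions."""
    text = grant_abstract.lower()
    return max((s for kw, s in RISK_CATEGORIES.items() if kw in text), default=0)

# Hypothetical scraped grants, keyed by funder.
grants = {
    "funder_X": "Gain-of-function study of aerosol transmission in ferrets",
    "funder_Y": "Vaccine development for seasonal influenza strains",
}
scores = {funder: risk_score(abstract) for funder, abstract in grants.items()}
print(scores)  # funder_X scores far higher than funder_Y
```

Keyword matching would of course misclassify plenty of grants; the point is only that even a crude, transparent scoring rule could make differences in funder risk-taking visible, which is the information loop this section is after.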

Highlighting the different risk levels of different projects to grantmakers could also lead to the preferential funding of less risky research. This might be achieved by assigning research proposals safety and security risk scores. Assignment of such risk scores might be researcher-led (preregistration could be a start), grantmaker-led, or automated.

For projects aiming to create information loops, infohazards need to be considered and managed. Whether a project is net positive will depend on its specifics.

Conclusions

These ideas have not received much attention so far and may serve as a starting point for further thinking. I have spent less than 20 hours thinking about each of these projects, so even an initial scoping study for any of them would likely be valuable. These projects are all highly interdisciplinary and do not require a set background; the crucial skills for success will be initiative, original thinking, and successful mediation across different viewpoints. While I describe these projects as generally good, this is not necessarily the case for every instantiation, especially for project ideas 3 and 4. Again, if you are interested in taking charge of or helping with any of these projects, please fill in the Google Form.

Acknowledgments

Many thanks to Joshua Monrad, James Wagstaff, and Andrew Snyder-Beattie for helpful feedback on this post.