Hi everyone!
@ptnhean and I have been working on something we’re excited to finally share: the first draft of a Biosecurity Statements Repository. This project grew in response to an exciting post by @Mslkmp and @Tessa A earlier this year (‘Five Tractable Biosecurity Projects You Could Start Tomorrow’).
What is it?
Our repository collects around 10–20 examples of biosecurity statements and practices from biological design tools and AI models. We grouped them based on how different developers are approaching biosecurity and dual-use risk, so users can get a quick glimpse of what the current landscape looks like.
You can think of it as a map of how the field is (and isn’t yet) talking about biosecurity risks and safety.
Who is it for?
If you are a developer working on computational biology tools and know that dual-use risks exist, but are not sure how to discuss or address them—this is for you. You can think of this repository as a reference for publishing your own biosecurity statements, whether those are for your own users or for the general public.
How did we make it?
We started by looking for public statements, journal articles and practices from organisations building AI biological design tools. We then sorted and tagged them, thematically categorising them based on:
whether they acknowledge dual-use risk
how they manage access
whether they reference established ethical standards, and
how they evaluate risk
After getting feedback from Tessa and Max (thank you both so much!), we added more categories to make the repository more useful and user-friendly, such as:
how users can interact with the tool or model
whether users need to pay to access the tool or specific features
tool subcategories
What do we hope to get out of this?
We hope that our efforts can help to:
Establish shared norms for mitigating biological design tool risks
Give new developers concrete examples, templates and language to start from or borrow
Nudge the field towards more transparent and standardised biosecurity practices
How do I use it and do you have an example?
The first tab provides an overview of all included tools and their corresponding categories.
The second tab includes exact wording used in the original biosecurity statements, including the paragraphs we referenced. You can see how organisations have phrased things.
Let’s look at an example. From the Repository (last updated in July 2025), you can see that AlphaFold 3 is developed by Google DeepMind and Isomorphic Labs. It is hosted on a secure server, has no paywall, and requires user authentication for access. It falls under the protein design and small biomolecule design subcategories, and also offers pathogen property prediction.
Based on our analysis, the organisations behind AlphaFold 3 have taken the following biosecurity actions: published a statement acknowledging dual-use risk, engaged experts in their risk assessment, conducted evaluations for dual-use risk, and held community consultations.
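To make the structure of an entry concrete, here is a minimal sketch of how the AlphaFold 3 record above could be represented programmatically. This is purely illustrative: the repository itself is a spreadsheet, and the field names below are our own invention, not the repository's actual schema.

```python
from dataclasses import dataclass

# Hypothetical record type mirroring the repository's categories.
# Field names are illustrative, not the repository's actual schema.
@dataclass
class RepositoryEntry:
    tool: str
    organisations: list[str]
    paywall: bool                   # "Paywall": choose 1 value
    access_management: list[str]    # "Access Management": choose >=1 value
    subcategories: list[str]        # "Tool Subcategories": choose >=1 value
    biosecurity_actions: list[str]  # "Biosecurity Actions": choose >=1 value

# The AlphaFold 3 example from the post, as a structured record.
alphafold3 = RepositoryEntry(
    tool="AlphaFold 3",
    organisations=["Google DeepMind", "Isomorphic Labs"],
    paywall=False,
    access_management=["Secure server", "User authentication required"],
    subcategories=[
        "Protein design",
        "Small biomolecule design",
        "Pathogen property prediction",
    ],
    biosecurity_actions=[
        "Published statement acknowledging dual-use risk",
        "Engaged experts in risk assessment",
        "Conducted evaluations for dual-use risk",
        "Held community consultations",
    ],
)

def acknowledges_dual_use(entry: RepositoryEntry) -> bool:
    """Example query: does this entry record any dual-use-related action?"""
    return any("dual-use" in a.lower() for a in entry.biosecurity_actions)
```

Representing entries this way makes simple landscape queries (e.g. filtering for tools whose developers acknowledge dual-use risk) a one-liner.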
A section below details a list of categories and values used in the repository, for those who are interested.
Feedback welcome :)
If you know developers or researchers who might find this useful, please share it with them! And if you have any feedback, ideas for new categories, or know of relevant tools we’ve missed, we would love to hear from you :)
Acknowledgements:
Team: Shiying He, PT Nhean
Conceptualisation and feedback: Tessa Alexanian and Max Langenkamp
Tool Subcategories and Values: Adapted from ‘Understanding AI-Facilitated Biological Weapon Development’ by the Centre for Long-Term Resilience (CLTR) and the Global Risk Index for AI-enabled Biological Tools by CLTR and RAND Europe; detailed further in the table below.
Repository Categories and Values:
| Category | Values |
| --- | --- |
| Date (MM/YYYY) | |
| Countries | |
| Organization | |
| User’s interaction with the model | Choose ≥1 value |
| Paywall | Choose 1 value |
| Access Management | Choose ≥1 value |
| Tool Subcategories (adapted from ‘Understanding AI-Facilitated Biological Weapon Development’ by the Centre for Long-Term Resilience (CLTR) and the Global Risk Index for AI-enabled Biological Tools by CLTR and RAND Europe) | Choose ≥1 value |
| Biosecurity Actions | Choose ≥1 value |
| Information Hazards | Only flag if present |