Copying over the rationale for publication here, for convenience:
Rationale for Public Release
Releasing this report inevitably draws attention to a potentially destructive scientific development. We do not believe that drawing attention to threats is always the best approach for mitigating them. However, in this instance we believe that public disclosure and open scientific discussion are necessary to mitigate the risks from mirror bacteria. We have two primary reasons to believe disclosure is necessary:
1. To prevent accidents and well-intentioned development
If no serious concerns are raised, the default course of well-intentioned scientific and technological development would likely result in the eventual creation of mirror bacteria. Creating mirror life has been a long-term aspiration of many academic investigators, and efforts toward this have been supported by multiple scientific funders.1 While creating mirror bacteria is not yet possible or imminent, advances in enabling technologies are expected to make it achievable within the coming decades. It does not appear possible to develop these technologies safely (or deliberately choose to forgo them) without widespread awareness of the risks, as well as deliberate planning to mitigate them. This concern is compounded by the possibility that mirror bacteria could accidentally cause irreversible harm even without intentional misuse. Without awareness of the threat, some of the most dangerous modifications would likely be made for well-intentioned reasons, such as endowing mirror bacteria with the ability to metabolize ᴅ-glucose to allow growth in standard media.
2. To build guardrails that could reliably prevent misuse
There are currently substantial technical barriers to creating mirror bacteria. Success within a decade would require efforts akin to those of the Human Genome Project or other major scientific endeavors: a substantial number of skilled scientists collaborating for many years, with a large budget and unimpeded access to specialized goods and services. Without these resources, entities reckless enough to disregard the risks or intent on misuse would have difficulty creating mirror bacteria on their own. Disclosure therefore greatly reduces the probability that well-intentioned funders and scientists would unwittingly aid such an effort, while providing very little actionable information to those who may seek to cause harm in the near term.
Crucially, maintaining this high technical barrier in the longer term also appears achievable with a sustained effort. If well-intentioned scientists avoid developing certain critical components, such as methods relevant to assembling a mirror genome or key components of the mirror proteome, these challenges would continue to present significant barriers to malicious or reckless actors. Closely monitoring critical materials and reagents such as mirror nucleic acids would create additional obstacles. These protective measures could likely be implemented without impeding the vast majority of beneficial research, although decisions about regulatory boundaries would require broad discussion amongst the scientific community and other stakeholders, including policymakers and the public. Since ongoing advances will naturally erode technical barriers, disclosure is necessary in order to begin discussions while those barriers remain formidable.
IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public is that it seems very useful to have at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail. Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge. This situation, although unfortunate, is probably justified by the nature of the risks! But still, it makes it hard for anyone on the outside to tell how serious the risks are, to understand the problems in detail, or to feel sufficiently motivated about the urgency of creating solutions.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are actually even more worried about, say, infohazardous-bioengineering-technique-#5, than they are about mirror life. Just being able to use mirror life as an example seems like it’s much healthier than having zero concrete examples and everything shrouded in secrecy.
Some of the cross-cutting things I am thinking about:
- scientific norms about whether to fund / publish risky research
- attempts to coordinate (on a national or international level) moratoriums against certain kinds of research
- the desirability of things like metagenomic sequencing, DNA synthesis screening for harmful sequences, etc
- research into broad-spectrum countermeasures like UVC light, super-PPE, pipelines for very quick vaccine development, etc
- just emphasising the basic overall point that global catastrophic biorisk seems quite real and we should take it very seriously
- and probably lots of other stuff!
So, I think it might be a kind of epistemic boon for all of biosecurity to have this public example, which will help clarify debates / advocacy / etc about the need for various proposed policies or investments.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are actually even more worried about, say, infohazardous-bioengineering-technique-#5, than they are about mirror life. Just being able to use mirror life as an example seems like it’s much healthier than having zero concrete examples and everything shrouded in secrecy.
I think it’s true that a lot of topics are not discussed because of concerns about infohazards. But I do think we already had some concrete examples, such as the hotly debated gain-of-function cases, the hypothetical of a pathogen as infectious as measles but as fatal as rabies, or myxomatosis killing 99% of rabbits.
I didn’t realize people in biosecurity were worried about infohazards. In EA circles I hear biosecurity talked about much less than the other cause areas, and now I’m wondering how much of that is the cloak of secrecy and how much is the field simply being neglected?
Infohazards are indeed a pretty big worry of lots of the EAs working on biosecurity: https://forum.effectivealtruism.org/posts/PTtZWBAKgrrnZj73n/biosecurity-culture-computer-security-culture