IMO, one helpful side effect (albeit certainly not a main consideration) of making this work public is that it seems very useful to have at least one worst-case biorisk that can be publicly discussed in a reasonable amount of detail. Previously, the whole field / cause area of biosecurity could feel cloaked in secrecy, backed up only by experts with arcane biological knowledge. This situation, although unfortunate, is probably justified by the nature of the risks! But still, it makes it hard for anyone on the outside to tell how serious the risks are, to understand the problems in detail, or to feel sufficiently motivated about the urgency of creating solutions.
By disclosing the risks of mirror bacteria, there is finally a concrete example to discuss, which could be helpful even for people who are even more worried about, say, infohazardous-bioengineering-technique-#5 than they are about mirror life. Just being able to use mirror life as an example seems much healthier than having zero concrete examples and everything shrouded in secrecy.
Some of the cross-cutting things I am thinking about:
scientific norms about whether to fund / publish risky research
attempts to coordinate (on a national or international level) moratoriums against certain kinds of research
the desirability of things like metagenomic sequencing, DNA synthesis screening for harmful sequences, etc
research into broad-spectrum countermeasures like UVC light, super-PPE, pipelines for very quick vaccine development, etc
just emphasising the basic overall point that global catastrophic biorisk seems quite real and we should take it very seriously
and probably lots of other stuff!
So, I think it might be a kind of epistemic boon for all of biosecurity to have this public example, which will help clarify debates / advocacy / etc about the need for various proposed policies or investments.
I think it’s true that a lot of topics are not discussed because of concerns about info hazards. But I do think we already had some concrete examples, such as some hotly debated gain-of-function cases, the hypothetical of a pathogen as infectious as measles but as fatal as rabies, or myxomatosis killing 99% of rabbits.
I didn’t realize people in biosecurity were worried about infohazards. In EA circles I hear biosecurity talked about much less than the other cause areas, and now I’m wondering how much of that is the cloak of secrecy and how much is the field simply being neglected?
Infohazards are indeed a pretty big worry of lots of the EAs working on biosecurity: https://forum.effectivealtruism.org/posts/PTtZWBAKgrrnZj73n/biosecurity-culture-computer-security-culture