How can we improve Infohazard Governance in EA Biosecurity?

Or: “Why EA biosecurity epistemics are whack”

The effective altruism (EA) biosecurity community focuses on reducing global catastrophic biological risks (GCBRs). This includes preparing for pandemics, improving global surveillance, and developing technologies to mitigate the risks of engineered pathogens. While the work of this community is important, there are significant challenges to developing good epistemics, or practices for acquiring and evaluating knowledge, in this area.

One major challenge is the issue of infohazards. Infohazards are ideas or information that, if widely disseminated, could cause harm. In the context of biosecurity, this could mean that knowledge of specific pathogens or their capabilities could be used to create bioweapons. As a result, members of the EA biosecurity community are often cautious about sharing information, particularly in online forums where it could be easily disseminated. [1]

The issue of infohazards is not straightforward. Even senior biosecurity professionals may have different thresholds for what they consider to be an infohazard. This lack of consensus can make it difficult for junior members to learn what is appropriate to share and discuss. Furthermore, it can be challenging for senior members to provide feedback on the appropriateness of specific information without risking further harm if that information is disseminated to a wider audience. At the moment, all EA biosecurity community-building efforts are essentially gate-kept by Open Phil, whose staff are particularly cautious about infohazards, even compared to experts in the field at [redacted]. Open Phil staff time is chronically scarce, so their heuristics on infohazards, threat models, and big-picture biosecurity strategy cannot realistically be absorbed and critiqued through 1:1 conversations alone. [2]

Challenges for cause and intervention prioritisation

These challenges can lead to a lack of good epistemics within the EA biosecurity community, as well as a deference culture where junior members defer to senior members without fully understanding the reasoning behind their decisions. This can result in a failure to adequately assess the risks associated with GCBRs and make well-informed decisions.

The lack of open discourse on biosecurity risks in the EA community is particularly concerning when compared to the thriving online discourse on AI alignment, another core area of longtermism for the EA movement. While there are legitimate reasons for being cautious about sharing information related to biosecurity, this caution may lead to a lack of knowledge sharing and limited opportunities for junior members of the community to learn from experienced members.

In the words of a biosecurity researcher who commented on this draft:

“Because of this lack of discussion, it seems that some junior biosecurity EAs fixate on the “gospel of EA biosecurity interventions” — the small number of ideas seen as approved, good, and safe to think about. These ideas seem to take up most of the mind space for many junior folks thinking about what to do in biosecurity. I’ve been asked “So, you’re working in biosecurity, are you going to do PPE or UVC?” one too many times. There are many other interesting defence-dominant interventions, and I get the sense that even some experienced folks are reluctant to explore this landscape.”

Another example is the difficulty of comparing biorisk and AI risk without engaging with potentially infohazardous concrete threat models. While both are considered core cause areas of longtermism, it is hard to prioritise between them without evaluating the likelihood of a catastrophic event. For example, one might argue that humanity is resilient and would recover quickly from most catastrophes, but that an engineered pandemic could plausibly cause extinction. Yet reasoning carefully about the likelihood of that scenario is itself infohazardous, making it difficult to decide how to allocate resources and effort.

Challenges for transparent and trustworthy advocacy

In the words of one of my mentors:

“The information hazard issue has even wider implications when it touches on policy advising: Deference to senior EAs and concern for infohazards mean that advocates for biosecurity policies cannot fully disclose their reasoning for specific policy suggestions. This means that a collaborative approach that takes non-EAs along to understand the reason behind policy asks and invites scrutiny and feedback is not possible. This kind of motivated, non-evidence-based advocacy makes others suspicious, which is already leading to a backlash against EA in the biosecurity space.”

Another person added:

“As one example, I had a conversation with a professor at a top school—someone who is broadly longtermism sympathetic and familiar with EA ideas—who told me they can’t understand how EA biosecurity folks expect to solve a problem without being able to discuss its nature.”

Picking up one of the aforementioned “gospel interventions”, let’s look at the details of stockpiling high-end personal protective equipment (PPE) for use in the event of a GCBR. While there are good arguments[3] that such equipment could be effective in preventing the spread of certain pathogens, stockpiling enough PPE for even a small fraction of the world’s population would be incredibly expensive. For example, stockpiling enough powered air-purifying respirators (PAPRs) for just 1% of the world’s population (80 million people) would cost $40 billion, assuming a low price of $500 per PAPR and ignoring storage and management costs. In addition, the shelf life of a PAPR is limited to around five years.
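As a sanity check on those numbers, here is a back-of-the-envelope calculation. All inputs are the illustrative assumptions above (world population of roughly 8 billion, a low-end $500 unit price, a ~5-year shelf life), not real procurement figures:

```python
# Back-of-envelope cost of stockpiling PAPRs for 1% of the world's population.
# All inputs are illustrative assumptions from the text, not authoritative figures.

world_population = 8_000_000_000   # roughly 8 billion people
coverage_fraction = 0.01           # stockpile for 1% of the population
unit_price_usd = 500               # assumed low-end price per PAPR
shelf_life_years = 5               # approximate shelf life of a stored PAPR

units_needed = world_population * coverage_fraction   # 80 million PAPRs
upfront_cost = units_needed * unit_price_usd          # $40 billion
annualised_cost = upfront_cost / shelf_life_years     # ~$8 billion/year to keep the stockpile current

print(f"Units needed:    {units_needed:,.0f}")
print(f"Upfront cost:    ${upfront_cost / 1e9:,.0f}B")
print(f"Annualised cost: ${annualised_cost / 1e9:,.0f}B per year (ignoring storage and management)")
```

Even under these charitable assumptions, simply keeping the stockpile current costs on the order of $8 billion per year once the five-year shelf life is factored in.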

To justify this level of spending, stockpiling advocates need to make strong arguments that GCBRs could cause irretrievable destruction and that PPE could be an effective means of preventing this. However, these arguments require a detailed understanding of the novel risks associated with GCBR-level pathogens and the concrete limitations of bread-and-butter PPE in unprecedented scenarios.
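One crude way to see why those arguments demand so much threat-model detail is a simple expected-value framing. Every symbol below is a placeholder I am introducing for illustration, not an estimate anyone has made:

```latex
% Crude expected-value framing; every symbol is an illustrative placeholder.
%   p_gcbr : probability (per stockpile cycle) of a GCBR in which PPE is decisive
%   p_work : probability the stockpiled PPE actually performs against such a pathogen
%   V      : value of the destruction averted
%   C      : stockpile cost per ~5-year cycle (~$40B from the figures above)
\[
  p_{\mathrm{gcbr}} \cdot p_{\mathrm{work}} \cdot V \;\gtrsim\; C \approx \$40\,\mathrm{B}
\]
```

Every factor on the left-hand side depends on exactly the kind of concrete threat modelling that current infohazard norms discourage discussing openly.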

What are known best practices?

I don’t know what the best practices are here, but I feel like other communities must have faced the issue of balancing inadvertent harm against the desire for open epistemics. I’m going to throw out a few quick ideas of things that might help, but I would really appreciate comments on good practices from other communities that manage information hazards or responsible disclosure effectively. For example, the UK’s Ministry of Defence implements a “need-to-know” principle, where classified information is only shared with individuals who require it for their specific tasks.

A few quick ideas:

  1. An infohazard manual, which, even if it leans towards the conservative side, provides clearer guidance on what is infohazardous. The aim is to curb the self-amplifying reticence that pushes people away from critical dialogues. This example is a good start. Please note that it does not reflect a consensus among senior community members.

  2. An infohazard hotline, recognising that judgement calls around infohazards are genuinely difficult. It offers a trusted figure in the community whom newcomers to biosecurity can text at any time with queries like, “Is this an infohazard?” or “What venues are appropriate for discussing this, if at all?”

  3. A secured, safely gatekept online forum that allows for more controlled and moderated online exchange, promotes the establishment of feedback loops and clear guidelines, and fosters a more collaborative and transparent approach to addressing GCBRs. While there are challenges to establishing and moderating such a forum, it could play a crucial role in promoting effective knowledge sharing and collaboration within the EA biosecurity community.

Without open discourse and feedback loops within the biosecurity community, it may be difficult to develop a nuanced understanding of the risks associated with GCBRs and the effectiveness of different risk mitigation strategies. This could result in a failure to adequately prepare for potential pandemics and other GCBRs. I hope this post crowdsources more ideas and best practices for infohazard governance.

Thanks to Tessa Alexanian, Rahul Arora, Jonas Sandbrink, and several anonymous contributors for their helpful feedback and encouragement in posting this!

  1. ^

    It should go without saying, but it’s worth reiterating: the potential harm from bioinfohazards is very real. Our goal should not be to dismiss these risks but to find better ways of managing them. This post is not a call for less caution but rather for more nuance and collaborative thinking in how we apply that caution.

  2. ^

    Potential Conflict of Interest: My research is funded by Open Philanthropy’s Biosecurity Scholarship.

  3. ^

    Shameless plug for my paper on this