How can we improve Infohazard Governance in EA Biosecurity?
Or: “Why EA biosecurity epistemics are whack”
The effective altruism (EA) biosecurity community focuses on reducing global catastrophic biological risks (GCBRs). This includes preparing for pandemics, improving global surveillance, and developing technologies to mitigate the risks of engineered pathogens. While the work of this community is important, there are significant challenges to developing good epistemics, or practices for acquiring and evaluating knowledge, in this area.
One major challenge is the issue of infohazards. Infohazards are ideas or information that, if widely disseminated, could cause harm. In the context of biosecurity, this could mean that knowledge of specific pathogens or their capabilities could be used to create bioweapons. As a result, members of the EA biosecurity community are often cautious about sharing information, particularly in online forums where it could be easily disseminated. [1]
The issue of infohazards is not straightforward. Even senior biosecurity professionals may have different thresholds for what they consider to be an infohazard. This lack of consensus can make it difficult for junior members to learn what is appropriate to share and discuss. Furthermore, it can be challenging for senior members to provide feedback on the appropriateness of specific information without risking further harm if that information is disseminated to a wider audience. At the moment, all EA biosecurity community-building efforts are essentially gatekept by Open Phil, whose staff are particularly cautious about infohazards, even compared to experts in the field at [redacted]. Open Phil staff time is chronically scarce, making it impossible to learn and critique their heuristics on infohazards, threat models, and big-picture biosecurity strategy through 1:1 conversations alone. [2]
Challenges for cause and intervention prioritisation
These challenges can lead to a lack of good epistemics within the EA biosecurity community, as well as a deference culture where junior members defer to senior members without fully understanding the reasoning behind their decisions. This can result in a failure to adequately assess the risks associated with GCBRs and make well-informed decisions.
The lack of open discourse on biosecurity risks in the EA community is particularly concerning when compared to the thriving online discourse on AI alignment, another core area of longtermism for the EA movement. While there are legitimate reasons for being cautious about sharing information related to biosecurity, this caution may lead to a lack of knowledge sharing and limited opportunities for junior members of the community to learn from experienced members.
In the words of a biosecurity researcher who commented on this draft:
“Because of this lack of this discussion, it seems that some junior biosecurity EAs fixate on the “gospel of EA biosecurity interventions” — the small number of ideas seen as approved, good, and safe to think about. These ideas seem to take up most of the mind space for many junior folks thinking about what to do in biosecurity. I’ve been asked “So, you’re working in biosecurity, are you going to do PPE or UVC?” one too many times. There are many other interesting defence-dominant interventions, and I get the sense that even some experienced folks are reluctant to explore this landscape.”
Another example is the difficulty of comparing biorisk and AI risk without engaging with potentially infohazardous concrete threat models. While both are considered core cause areas of longtermism, it is hard to prioritise between them without evaluating the likelihood of a catastrophic event. For example, humanity is resilient and could recover from most catastrophes relatively quickly, so the case for prioritising biorisk depends heavily on whether it could plausibly cause extinction. But thinking carefully about the likelihood of that scenario is already infohazardous, making it difficult to decide how to allocate resources and effort.
Challenges for transparent and trustworthy advocacy
In the words of one of my mentors:
“The information hazard issue has even wider implications when it touches on policy advising: Deference to senior EAs and concern for infohazards mean that advocates for biosecurity policies cannot fully disclose their reasoning for specific policy suggestions. This means that a collaborative approach that takes non-EAs along to understand the reason behind policy asks and invites scrutiny and feedback is not possible. This kind of motivated, non-evidence-based advocacy makes others suspicious, which is already leading to a backlash against EA in the biosecurity space.”
Another person added:
“As one example, I had a conversation with a professor at a top school—someone who is broadly longtermism sympathetic and familiar with EA ideas—who told me they can’t understand how EA biosecurity folks expect to solve a problem without being able to discuss its nature.”
Picking up one of the aforementioned “gospel interventions”, let’s look at the details of stockpiling high-end personal protective equipment (PPE) for use in the event of a GCBR. While there are good arguments[3] that such equipment could be effective in preventing the spread of certain pathogens, stockpiling enough PPE for even a small fraction of the world’s population would be incredibly expensive. For example, stockpiling enough powered air-purifying respirators (PAPRs) for just 1% of the world’s population (80 million people) would cost $40 billion, assuming a low price of $500 per PAPR and ignoring storage and management costs. In addition, the shelf life of a PAPR is limited to around five years.
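For transparency, here is a minimal back-of-envelope sketch of that arithmetic in Python, using the assumptions just stated (roughly 8 billion people, 1% coverage, $500 per PAPR, five-year shelf life); the annualised replacement figure at the end is simply the capital cost spread over the shelf life, not an independent estimate.

```python
# Back-of-envelope sketch of the PAPR stockpile cost above.
# All inputs are the post's stated assumptions; adjust them to see how the total moves.

WORLD_POPULATION = 8_000_000_000   # rough global population
COVERAGE = 0.01                    # stockpile for 1% of the world's population
UNIT_COST_USD = 500                # optimistic price per PAPR (excludes storage/management)
SHELF_LIFE_YEARS = 5               # approximate PAPR shelf life

units_needed = WORLD_POPULATION * COVERAGE
capital_cost = units_needed * UNIT_COST_USD
annualised_cost = capital_cost / SHELF_LIFE_YEARS  # stockpile must be replaced as it expires

print(f"Units needed: {units_needed:,.0f}")                              # 80,000,000
print(f"Up-front cost: ${capital_cost / 1e9:,.0f}B")                     # $40B
print(f"Annualised replacement cost: ${annualised_cost / 1e9:,.0f}B/yr") # $8B/yr
```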
To justify this level of spending, stockpiling advocates need to make strong arguments that GCBRs could cause irretrievable destruction and that PPE could be an effective means of preventing this. However, these arguments require a detailed understanding of the novel risks associated with GCBR-level pathogens and the concrete limitations of bread-and-butter PPE in unprecedented scenarios.
What are known best practices?
I don’t know what the best practices are here, but I feel like other communities must have faced the issue of balancing inadvertent harm against the desire for open epistemics. I’m going to throw out a few quick ideas of things that might help, but I would really appreciate comments on good practices from other communities that manage information hazards or responsible disclosure effectively. For example, the UK’s Ministry of Defence implements a “need-to-know” principle, where classified information is only shared with individuals who require it for their specific tasks.
A few quick ideas:
An infohazard manual, which, even if it leans towards the conservative side, provides clearer guidance on what is infohazardous. The aim is to curb the self-amplifying reticence that pushes people away from critical dialogues. This example is a good start. Please note that it doesn’t echo a consensus among senior community members.
An infohazard hotline, recognizing the complexities of making judgment calls around infohazards. It offers a trusted figure in the community whom newcomers in biosecurity can text anytime with queries like, “Is this an infohazard?” or “What venues are appropriate for discussing this, if at all?”
A secured, safely gatekept online forum that allows for more controlled and moderated online exchange, promotes the establishment of feedback loops and clear guidelines, and fosters a more collaborative and transparent approach to addressing GCBRs. While there are challenges to establishing and moderating such a forum, it could play a crucial role in promoting effective knowledge sharing and collaboration within the EA biosecurity community.
Without open discourse and feedback loops within the biosecurity community, it may be difficult to develop such a nuanced understanding of the risks associated with GCBRs and the effectiveness of different risk mitigation strategies. This could result in a failure to adequately prepare for potential pandemics and other GCBRs. I hope this post crowdsources more ideas and best practices for infohazard governance.
Thanks to Tessa Alexanian, Rahul Arora, Jonas Sandbrink, and several anonymous contributors for their helpful feedback and encouragement in posting this!
- ^
It should go without saying, but it’s worth reiterating: the potential harm from bioinfohazards is very real. Our goal should not be to dismiss these risks but to find better ways of managing them. This post is not a call for less caution but rather for more nuance and collaborative thinking in how we apply that caution.
- ^
Potential Conflict of Interest: My research is funded by Open Philanthropy’s Biosecurity Scholarship.
- ^
Shameless plug for my paper on this
Hi Nadia, thanks for writing this post! It’s a thorny topic, and I think people are doing the field a real service when they take the time to write about problems as they see them –– I particularly appreciate that you wrote candidly about challenges involving influential funders.
Infohazards truly are a wicked problem, with lots of very compelling arguments pushing in different directions (hence the lack of consensus you alluded to), and it’s frustratingly difficult to devise sound solutions. But I think infohazards are just one of many factors contributing to the overall opacity in the field that is causing some of these epistemic problems, and I’m a bit more hopeful about other ways of reducing that opacity. For example, if the field had more open discussions about things that are not very infohazardous (e.g., comparing strategies for pursuing well-defined goals, such as maintaining the norm against biological weapons), I suspect it’d mitigate the consequences of not being able to discuss certain topics (e.g. detailed threat models) openly. Of course, that just raises the question of what is and isn’t an infohazard (which itself may be infohazardous...), but I do think there are some areas where we could pretty safely move in the direction of more transparency.
I can’t speak for other organisations, but I think my organisation (Effective Giving, where I lead the biosecurity grantmaking program) could do a lot to be more transparent just by overcoming obstacles to transparency that are unrelated to infohazards. These include the (time) costs of disseminating information; concerns about how transparency might affect certain key relationships, e.g. with prospective donors whom we might advise in the future; and public relations considerations more generally. These are definitely very real obstacles, but they generally seem more tractable than the infohazard issue.
I think we (again, just speaking for Effective Giving’s biosecurity program) have a long way to go, and I’d personally be quite disappointed if we didn’t manage to move in the direction of sharing more of our work during my tenure. This post was a good reminder of that, so thanks again for writing it!
These are really important points, thanks for starting a discussion on this topic. It seems like the infohazard manual by Chris Bakerlee and Tessa Alexanian is an excellent way forward. What do you think it is lacking? I am not an expert here, and I’m not trying to imply their framework is complete; I genuinely want to learn more about what’s needed (it’s hard to convey tone via text).
You also mention Chris and Tessa’s manual “doesn’t echo a consensus among senior community members”. This surprises me, because I consider Chris to be a key senior community member in the biosecurity space. He is quite literally Open Philanthropy Project’s senior program associate in biosecurity and pandemic preparedness. Tessa also seems to be a leader in the biosecurity space; she has run all the biosecurity information sessions I’ve attended, has been the featured guest on the most popular biosecurity podcasts I recommend people listen to, and she has written extensively on possible new projects in the biosecurity space. At the bottom of the manual, Chris and Tessa thank a bunch of other people for contributing. This list of people encompasses a big percentage of who I consider to be senior members of the EA biosecurity community.
You clearly didn’t write this in a vacuum—indeed you seem to have written this post with feedback from Tessa—so I am again asking with genuine curiosity: what do other senior biosecurity community members think we should do about infohazards? And are these EA-aligned people with different frameworks? Or are you referencing people outside the EA community, such as those working in government bioweapons programs or academic synthetic biology research?
Thanks very much for your time!
Thanks for this comment, and thanks to Nadia for writing the post, I’m really happy to see it up on the forum!
Chris and I wrote the guidance for reading groups and early entrants to the field; this was partly because we felt that new folks are most likely to feel stuck/intimidated/forced-into-deference/etc. and because it’s where we most often found ourselves repeating the same advice over and over.
I think there are people whose opinions I respect who would disagree with the guidance in a few ways:
We recommend a few kinds of interpersonal interventions, and some people think this is a poor way to manage information hazards, and the community should aim to have much more explicit / regimented policies
We recommend quite a bit of caution about information hazards, which more conservative people might consider an attention hazard in and of itself (drawing attention to the fact that information that would enable harm could be generated)
We recommend quite a bit of caution about information hazards, which less conservative people might consider too encouraging of deference or secrecy (e.g. people who have run into more trouble doing successful advocacy or recruiting/fostering talent, people who have different models of infohazard dynamics, people who are worried that a lack of transparency worsens the community’s prioritization)
We don’t cover a lot of common scenarios, as Nadia noted in her comment
(Side note: it’s always both flattering and confusing to be considered a “senior member” of this community. I suppose it’s true, because EA is very young, but I have many collaborators and colleagues who have decade(s) of experience working full-time on biorisk reduction, which I most certainly do not.)
I think part of this is that you are quite active on the forum, give talks at conferences, etc., making you much more visible to newcomers in the field. Others in biosecurity have decades of experience but are less visible to newcomers. Thus, it is understandable to infer that you are a “senior member.”
Thanks, really helpful context!
Looking around and realizing you’re the grown up now can be startling. When did I sign up for this responsibility????
Thanks for your comment!
On what is lacking: It was written for reading groups, which is already a softly gatekept space. It doesn’t provide guidance on other communication channels: what people could write blogs or tweets about, what is safe to talk to LLMs about, what about google docs, etc. Indeed, I was concerned about infinitely abstract galaxy-brain infohazard potential from this very post.
On dissent:
I wanted to double down on the message in the document itself that it is preliminary and not the be-all-end-all.
I have reached out to one person I have in mind within EA biosecurity who pushed back on the infohazard guidance document to give them the option to share their disagreement, potentially anonymously.
Thank you, super helpful context!
Thanks for writing this, I found it helpful for understanding the biosecurity space better!
I wanted to ask whether you have any advice, as a community builder, for handling the difficulties biosecurity poses for cause prioritisation.
I think it is easy to build an intuitive case that biohazards are not very important or not an existential risk, and this is often done by my group members (even good fits for biosecurity like biologists and engineers), who then dismiss the area in favour of other things. They (and I) do not have access to the threat models which people in biosecurity are actually worried about, making the area extremely difficult to evaluate. An example of this kind of thinking is David Thorstad’s post on overestimating risks from biohazards, which I thought was somewhat disappointing epistemically: https://ineffectivealtruismblog.com/2023/07/08/exaggerating-the-risks-part-9-biorisk-grounds-for-doubt/.
I suppose the options for managing this situation are:
Encourage deference to the field’s view that biosecurity is worth working on relative to other EA areas.
Create some kind of resource which isn’t an infohazard in itself, but which makes a good case for biosecurity’s importance, perhaps by gesturing at some credible threat models.
Permit the status quo, which probably leads to an underprioritisation of biosecurity.
Option 2 seems best if it is at all feasible, but I am unsure how to choose between 1 and 3.
This is more a response to “it is easy to build an intuitive case for biohazards not being very important or an existential risk”, rather than your proposals...
My feeling is that it is fairly difficult to make the case that biological hazards present an existential, as opposed to catastrophic, risk, and that while this matters for some EA types selecting their career paths, it doesn’t matter as much in the grand scheme of advocacy? The set of philosophical assumptions under which “not an existential risk” can be rounded to “not very important” seems common in the EA community, but extremely uncommon outside of it.
My best guess is that any existential biorisk scenarios probably route through civilisational collapse, and that those large-scale risks are most likely a result of deliberate misuse, rather than accidents. This seems importantly different from AI risk (though I do think you might run into trouble with reckless or careless actors in bio as well).
I think a focus on global catastrophic biological risks already puts one’s focus in a pretty different (and fairly neglected) place from many people working on reducing pandemic risks, and that the benefit of trying to get into the details of whether a specific threat is existential or catastrophic doesn’t really outweigh the costs of potentially generating infohazards.
My guess is that (2) will be fairly hard to achieve, because the sorts of threat models that are sufficiently detailed to be credible to people trying to do hardcore existential-risk-motivated cause prioritization are of dubious cost-benefit from an infohazard perspective.
Nice comment! To respond to your options:
Deference doesn’t seem ideal; it seems against the norms of the EA community.
Like you say, this seems very feasible. I would be surprised if there wasn’t something like this already. And you could even make the point that the threat models used aren’t even the highest-risk ones; others that you don’t talk about could be even worse.
Obviously not ideal.
Another data point from a post on Reflective Altruism about biorisk:
Thanks for posting this, Nadia!
I would go further, and say that it is challenging to determine whether biorisk should be one of the core areas of longtermism. FWIW, the superforecasters and domain experts of The Existential Risk Persuasion Tournament (XPT) predicted the extinction risk until 2100 from engineered pathogens to be 13.5 % (= 0.01/0.074) and 1.82 times (= 0.01/0.0055) that of nuclear war. This is seemingly in contrast with nuclear not being a core area of longtermism (unlike AI and bio).
I personally think both superforecasters and domain experts are greatly overestimating nuclear extinction risk (I guess it is more like 10^-6 in the next 100 years). However, I find it plausible that extinction risk from engineered pathogens is also much lower than the 3 % bio existential risk from 2021 to 2120 guessed by Toby Ord in The Precipice. David Thorstad will be exploring this in a series (the 1st post is already out).
My impression was that nuclear risk has usually ended up as a somewhat lower priority for EAs because it’s less neglected?
Thanks for asking, Jeff!
According to 80,000 Hours’ profiles on nuclear war and catastrophic pandemics, it looks like scale, neglectedness and solvability play similar roles:
The scale of nuclear war might be 10 % that of catastrophic pandemics:
“We think the direct existential risk from nuclear war (i.e. not including secondary effects) is less than 0.01%. The indirect existential risk seems around 10 times higher”. So existential nuclear risk is less than 0.1 %, which might be interpreted as 0.01 %?
“Overall, we think the risk [from “existential biological catastrophe”] is around 0.1%, and very likely to be greater than 0.01%, but we haven’t thought about this in detail”.
Catastrophic pandemics might be 3 times as neglected as nuclear war:
“This issue is not as neglected as most other issues we prioritise. Current spending is between $1 billion and $10 billion per year (quality-adjusted).” So maybe 3 billion (geometric mean)?
“As a result, our quality-adjusted estimate suggests that current spending is around $1 billion per year. (For comparison with other significant risks, we estimate that hundreds of billions per year are spent on climate change, while tens of millions are spent on reducing risks from AI.)”
It sounds like they think reducing the risk from catastrophic pandemics is more tractable:
“Making progress on nuclear security seems somewhat tractable. While many routes to progress face significant political controversy, there may also be some more neglected ways to reduce this risk.”
“There are promising existing approaches to improving biosecurity, including both developing technology that could reduce these risks (e.g. better bio-surveillance), and working on strategy and policy to develop plans to prevent and mitigate biological catastrophes.”
So you may be right that the level of risk is not a major driver for nuclear war not being a core area. However, I guess other organisations believe the bio existential risk to be higher than 80,000 Hours, whereas few will have higher estimates for nuclear existential risk.
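For what it’s worth, here is a very crude sketch of how the two quantified factors above (scale and neglectedness) combine under the numbers quoted from 80,000 Hours; it ignores tractability entirely and is purely illustrative, not a cost-effectiveness estimate.

```python
# Crude scale x neglectedness comparison using the 80,000 Hours figures quoted above.
# Tractability is ignored, so this is illustrative only, not a cost-effectiveness estimate.

risk = {"nuclear": 0.0001, "bio": 0.001}   # existential risk guesses (~0.01% vs ~0.1%)
spend = {"nuclear": 3e9, "bio": 1e9}       # rough quality-adjusted spending in $/year

for cause in risk:
    # existential risk per dollar of current annual spending, as a rough priority signal
    ratio = risk[cause] / spend[cause]
    print(f"{cause}: {ratio:.2e} existential risk per $/yr of current spending")

# Under these inputs, bio scores roughly 30x higher than nuclear on this crude metric,
# consistent with the ~10x scale and ~3x neglectedness differences discussed above.
```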
+1 for the idea of a gatekept biosecurity forum.
My model of dealing with infohazards in policy advocacy is to remember that most people are not scope sensitive, and that we should go with the least infohazardous justification for a given policy idea.
Most policy ideas in GCBR response can be justified based on the risk of natural pandemics, and many policy ideas in GCBR prevention can be justified based on risks of accidental release. Discussing the risks of deliberate bioterrorism using engineered pathogens is only needed to justify a very small subset of GCBR prevention policy ideas.
Do you think the PPE/PAPR example is part of that very small subset? It just happens to be the area I started working on by deference, and I might’ve gotten unlucky.
Or is the crux here response vs prevention?
I think PAPR / developing very high-quality PPE can probably be justified on the basis of accidental release risks, and discussing deliberate threats wouldn’t add much to the argument; but stockpiles of basic PPE would be easily justified on natural threats alone.
I think that in addition to policymakers not being scope sensitive, they’re also rarely thinking in terms of expected value, such that concern around accidents can drive similar action to concern around deliberate threats, since the probability of accidents is greater.
Actually, a big caveat here is that policymakers in defence / national security departments might be more responsive to the deliberate threat risk, since that falls more clearly within their scope.
Upvoted.
There need to be more infosec people. 80k is on the ball on this. If you train a large number of people and they’re well networked, you still get a lot of duds who don’t know critical basics, like how conversations near smartphones are compromised by default, but you also pump out top performers, like the people who know that smartphone-free dark zones stick out like a sore thumb in 3D space. It’s the top performers who can do things like weigh the costs and benefits of crowdsourcing strategies such as a closed-off forum dedicated to biosecurity, since that cost-benefit analysis requires people who know critical details, like how everyone on such a forum would be using insecure operating systems, or how major militaries and intelligence agencies around the world can completely sign-change their evaluations/esteem of GOF research, at unpredictable times, and in contravention of previous agreements and norms (e.g. yearslong periods of what appears to be consensus opposition to GOF research). I think that evaluation can be done (I’m currently leaning towards “no”), but I don’t have nearly enough on-the-ground exposure to the disruptions and opportunity costs caused by the current paradigm, so I can only weigh in.
Bringing more people in also introduces liabilities. The best advice I can think of is skilling existing people up, e.g. by having them read good books about counterintelligence and infosec (I currently don’t have good models for how to distinguish good books from bad ones; you need to find people you trust who already know which is which). Actually, I think I might be able to confidently recommend the security mindset tag on LessWrong and the CFAR handbook; both of those should consistently allow more good work and broader perspectives to be handled by fewer people.
An infohazard manual seems like a great way to distill best practice and streamline the upskilling process, but there should be multiple different manuals depending on the role. There should not be one single manual, and Chris and Tessa’s manual is far from optimal upskilling (compared to distillations of the security mindset tag and the CFAR handbook alone); you could even have someone make updated versions and roles on the go (e.g. several times per year per reader). But one way or another, each should be distributed and read in printed form AND NOT digital form (even if that means a great waste of paper, ink, and space).
A related point that I have observed in myself:
I think dual-use technologies have a higher potential for infohazards. I have a preference for not needing to be “secretive,” i.e., not needing to be mindful about what information I can share publicly. Probably there is also some deference going on, in that I shied away from working on more infohazard-y-seeming technologies since I wasn’t sure how to deal with selectively sharing information. Accordingly, I have preferred to work on biorisk mitigation strategies that have little dual-use potential and, thus, low infohazard risk (in my case far-UVC; another example would be PPE).
The problem with this is that it might be much more impactful for me to work on a biorisk mitigation technology that has more dual-use potential, but I haven’t pursued this work because of infohazard vibes and uncertainty about how to deal with that.
Another difficulty, especially for junior people, is that working on projects with significant infohazard risk could prevent you from showing your work and proving your competence. Since you might not be able to share your work publicly, this could reduce your chances of career advancement since you (seemingly) have a smaller track record.
This response is very late since I only just came across this post, but I was wondering whether the author has any more details on what the backlash against EA they mentioned specifically entailed? I haven’t been able to find any information about this on the web, unless it specifically refers to the backlash against Esvelt’s arguments regarding DNA synthesis screening, or to discussions of the effects of LLMs. Is the biosecurity community also, for example, undergoing a backlash against the arguments for plausible biological existential risk that EAs are making?
I can’t speak for the author, and while I’d classify these as examples of suspicion and/or criticism of EA biosecurity rather than a “backlash against EA”, here are some links:
Will splashy philanthropy cause the biosecurity field to focus on the wrong risks?, Filippa Lentzos, 2019 (also linked and discussed on the forum)
Exaggerating the risks post series, Reflective Altruism, 2022-2024
Recent criticism specifically of AI-Bio risks, such as Propaganda or Science: Open Source AI and Bioterrorism Risk
I’ll also say I’ve heard criticism of “securitising health”, which is much less about EAs in biosecurity and more about clashing concerns between groups that prioritise global health and national security, where EA biosecurity folks often end up seen as more aligned with the national security concerns due to prioritising risks from deliberate misuse of biology.
Thanks Tessa. I actually came to this post and asked this question because it was quoted in the ‘Exaggerating the risks’ series, but then this post didn’t give any examples to back up this claim, which Thorstad has then quoted. I had come across this article by Undark which includes statements by some experts that are quite critical of Kevin Esvelt’s advocacy regarding nucleic acid synthesis. I think the Lentzos article is the kind of example I was wondering about—although I’m still not sure if it directly shows that the failure to justify their position on the details of the source of risk itself is the problem. (Specifically, I think the key thing Lentzos is saying is the risks Open Phil is worrying about are extremely unlikely in the near-term—which is true, they just think it’s more important for longtermist reasons and are therefore 1) more worried about what happens in the medium and long term and 2) still worried about low risk, high harm events. So the dispute doesn’t seem to me to be necessarily related to the details of catastrophic biorisk itself.)
Great post! One thing that came to mind is that caution really is the “norm” that gets pointed at when you start doing biosecurity-relevant work in EA, which has had its tradeoffs for me, some of which you’ve pointed out.
It feels very plausible to me that how many people know about biorisk threat models is the most important lever for impacting biorisk. I’ve heard that many state bioweapons programs were started because states found out that other states thought bioweapons were powerful. If mere rumors caused them to invest millions in bioweapons, then preventing those rumors would have been an immensely powerful intervention, and preventing further such rumors is critically important.
You know what would be a great way to teach this? Just make up an infohazard-type event and tabletop-wargame it. Say the thing is called a mome rath and it’s anti-memetic or something (go read Worth the Candle). Have the experts explain how they would treat the problem, and use that as a guide for how to interact with infohazards.
Maybe I am focused on the government piece of this, but there’s probably damaging information that would hurt national security if it got out. That’s why we have classification systems in place. If we can’t even have the experts talk about it, then we need to really think about why that is and give them a softball to explain it. (Then think about why they’re thinking this, etc.)
Look, there is a risk. We just need to be able to explain it so that people don’t go looking at the door of forbidden knowledge thinking, “I really need to know everything in there,” when really what is behind the door is just a bunch of formulas that the current batch of 3D printers knows not to make.
Counterpoint: make it boring and no one will be interested. Instead of Roko’s Basilisk, think of calling it IRS-CP Form 23A.