What are the biggest misconceptions about biosecurity and pandemic risk?

Link post

by Anemone Franz and Tessa Alexanian

80,000 Hours ranks preventing catastrophic pandemics as one of the most pressing problems in the world, and we have advised many of our readers to work in biosecurity to have high-impact careers.

But biosecurity is a complex field, and while the threat is undoubtedly large, there’s a lot of disagreement about how best to conceptualise and mitigate the risks. We wanted to get a better sense of how the people thinking about these threats every day perceive the risks.

So we decided to talk to more than a dozen biosecurity experts to better understand their views.

To make them feel comfortable speaking candidly, we granted the experts we spoke to anonymity. Sometimes disagreements in this space can get contentious, and certainly many of the experts we spoke to disagree with one another. We don’t endorse every position they’ve articulated below.

We think, though, that it’s helpful to lay out the range of expert opinions from people who we think are trustworthy and established in the field. We hope this will inform our readers about ongoing debates and issues that are important to understand — and perhaps highlight areas of disagreement that need more attention.

The group of experts includes policymakers serving in national governments, grantmakers for foundations, and researchers in both academia and the private sector. Some of them identify as being part of the effective altruism community, while others do not. All the experts are mid-career or at a more senior level. Experts chose to provide their answers either in calls or in written form.

Below, we highlight 14 responses from these experts about misconceptions and mistakes that they believe are common in the field of biosecurity, particularly as it relates to people working on global catastrophic risks and in the effective altruism community.

Here are some of the areas of disagreement that came up:

  • What lessons should we learn from COVID-19?

  • Is it better to focus on standard approaches to biosecurity or search for the highest-leverage interventions?

  • Should we prioritise preparing for the most likely pandemics or the most destructive pandemics — and is there even a genuine trade-off between these priorities?

  • How big a deal are “information hazards” in biosecurity?

  • How should people most worried about global catastrophic risks engage with the rest of the biosecurity community?

  • How big a threat are bioweapons?

For an overview of this area, you can read our problem profile on catastrophic pandemics. (If you’re not very familiar with biosecurity, that article may provide helpful context for understanding the experts’ opinions below.)

Here’s what the experts said.

Expert 1: Failures of imagination and appeals to authority

In discussions around biosecurity, I frequently encounter a failure of imagination. Individuals, particularly those in synthetic biology and public health sectors, tend to rely excessively on historical precedents, making it difficult for them to conceive of novel biological risks or the potential for bad actors within a range of fields. This narrow mindset hinders proactive planning and compromises our ability to adequately prepare for novel threats.

Another frequent problem is appeal to authority. Many people tend to suspend their own critical reasoning when a viewpoint is confidently presented by someone they perceive as an authoritative figure. This can stymie deeper reflections on pressing biosecurity issues and becomes especially problematic when compounded by information cascades. In such scenarios, an uncritically accepted idea from an authoritative source can perpetuate as fact, sometimes going unquestioned for years.

There are various appeals to tradition that frequently skew discussions in broader biosecurity communities. For example, the adage that “nature is the worst bioterrorist” has gained such widespread acceptance that it has become a memetic obstacle to more nuanced understanding. We can be biased in the data we pay attention to, and in times of unprecedented technological progress, we are at great risk of falling prey to our assumptions and “fighting the last war.”

For example, before the detonation of the first atomic bomb, the idea that a single device could wreak destruction on an entire city was nearly inconceivable based on prior warfare technologies. Previous estimates of damage were based on conventional explosives and did not remotely approximate the catastrophic impact of atomic bombs.

The advent of nuclear technology led to a situation where traditional calculations and historical precedents were starkly insufficient in predicting the risks and outcomes. It would be wise to avoid the same errors of reasoning when thinking about future risks from biology.

Expert 2: Exaggerating small-scale risks and Western chauvinism

There is a vast difference between a disease-causing agent or pathogen and something that could actually create massive harm. Treating the two as the same thing creates bad policies. It’s technically possible for someone to grow anthrax spores in their bathtub, but the amount of damage they could do is constrained by the fact that anthrax typically doesn’t spread between people. So focusing on the biological agents with the largest capability for harm would probably mean prioritising risks other than anthrax.

I also think there’s a lot of chauvinism in biosecurity. This is less true for those who work internationally, and more true for people who work in the US. People always think, well, of course Johns Hopkins University isn’t going to be part of a biological weapons programme. But that’s not the viewpoint of Russia, China, and the DPRK. Stop being so jingoistic. Stop thinking Western governments are always well-meaning and non-Western governments are not. The best way to view a US action is to ask: what would you think if you found out that China was researching the same thing?

The other point about chauvinism is that the risks aren’t the same in the rich and the poor world. The rich world ignores the biosecurity of faecal-oral diseases because we have good sanitation infrastructure. Conversely, antibiotic resistance is top of mind for rich countries that use antibiotics as the cornerstone of a biological response, but it will not change the consequences of an attack in a poor country that has limited stocks of antibiotics to begin with.

Expert 3: Useless reports and the world beyond effective altruism

I think there are plenty of failure modes in how people go about trying to do good things in the world. Writing a report, sending it into the void, and then expecting impact is a massive failure mode.

Another failure mode is assuming that things that are high status or visible within EA are the most effective. A lot of the best work in biosecurity comes from people whom most EAs have no idea exist, and those people should be more of the role models that EAs have.

Expert 4: Overconfidence in threat models

I think some people, including senior people, are overconfident in their threat models and have an overly simplistic view of certain issues. There are also a lot of other people who are under-confident and deferential in a way that I think is bad. So something to do with hierarchy and forming views based on deference is probably a big misconception.

Generally, EA biosecurity has failed to really grapple with or seriously weigh the benefits of things like gain-of-function research or pandemic virus discovery. I think the analysis of these things can be quite simplistic, and people tend to form the bottom line first. I think most EAs think it is probably bad to discover new pandemic viruses in nature, without thinking too much about it. I also think that’s probably bad, but a lot of people could benefit from more openness to being wrong.

Expert 5: Interventions that don’t address the largest threats

I’m generally just witheringly sceptical of incrementalist biosecurity interventions like just hiring more doctors.

In global catastrophic risk, obviously you do want to reduce risk in an incremental way. But you do it by closing off particular parts of a threat landscape.

I tend to favour interventions that are supported by a story that says something like: but for this intervention, this particular threat would have killed 8 or 10 billion people — but instead it only kills less than a billion, or something like that.

COVID was some evidence in favour of my position being correct. The main factor in saving lives was the vaccine. It’s a very heavy-tailed distribution in terms of actual impact. It’s probably an archetypal EA vibe to have, but it’s one I’m unapologetic about.
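To make the heavy-tailed claim concrete, here is a minimal simulation sketch in Python. All of the parameters are illustrative assumptions, not estimates from this expert or from real outbreak data; the point is only to show how, under a power-law tail, a small number of extreme events dominate total harm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model: event death tolls drawn from a Pareto-style
# (power-law) distribution. The shape and scale are invented purely for
# illustration; they are not calibrated to any real pandemic data.
shape = 1.1                  # closer to 1 means a heavier tail
scale = 1e4                  # minimum event size, in deaths
tolls = scale * (1 + rng.pareto(shape, size=100_000))

# Share of all deaths accounted for by the worst 1% of events
worst_1_percent = np.sort(tolls)[-len(tolls) // 100:]
share = worst_1_percent.sum() / tolls.sum()
print(f"Share of total deaths from the worst 1% of events: {share:.0%}")
```

With a tail this heavy, most of the expected harm sits in a handful of extreme events, which is the statistical intuition behind favouring interventions that close off the worst parts of the threat landscape.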

Expert 6: Over-reliance on past experience and overconfidence about the future

There are two general categories of common failures in most policy efforts. One is based on an over-reliance on past experience and the other is based on a flawed sense of confidence about the future.

For biosecurity and pandemic risk, we shouldn’t overly rely on successful interventions of the past in order to protect us against future biological incidents — such as vaccination and other medical countermeasures. One of the major successes during the COVID-19 pandemic was the rapid development of vaccines. So it’s easy to understand why folks would want to double down on an approach that worked. In fact, it’s an approach that has worked for combatting and eradicating infectious diseases for many decades.

However, medical countermeasures should not be the cornerstone of our playbook moving forward, not least because we’ve seen their limitations in preventing disease transmission, and because anti-vaccine sentiment limits their uptake.

Plus, if we think that medical countermeasures are the answer to all of our pandemic problems, we may be motivated to use AI or research automation to design and develop medical countermeasures at any cost — even if the tools and approaches we use could be misused.

On the other hand, a future world where biology becomes increasingly programmable is often imagined by those with confidence and optimism about the future. But it’s this same confidence in biology for good that keeps them from visualising a world where biology can be misused by actors with less expertise and fewer resources.

These optimists are often the same people who believe nothing robust should be done to address dual-use risk because it would limit innovation and its potentially life-changing transformations. But it doesn’t quite make sense that innovations in AI and machine learning could be so life-changing for good without commensurate potential to be life-changing for bad. We need to think about minimising the risk with the same level of rigour as maximising the good.

Expert 7: Resistance from governments and antagonising expert communities

I think people overestimate how likely it is that an intervention that prevents global catastrophic biological risks (GCBRs) would be adopted by governments. In my experience, it’s challenging to get interventions that provide measurable, real-world public health benefits funded and adopted — unless they can be proven to pay for themselves and offer regular benefits that can be reported to decision makers.

Interventions that may never be used or only be used very occasionally also can’t be tested for their accuracy and usefulness in a real-world setting. Similar interventions used routinely can fail in unexpected ways or for reasons that weren’t considered, so getting regular feedback based on real-world settings is crucial.

This is especially important in determining whether decision makers and the public trust the information provided, as a lack of trust or confidence can undermine a perfectly good early warning system. I also see people treat GCBR preparedness, public health, and global health interventions as different things when in reality, there are solutions that can be used for multiple purposes and can provide a better case for adoption if considered jointly.

Another issue I see is people positioning themselves in opposition to pathogen researchers, synthetic biologists, AI developers, or people working in public health due to disagreements about how they work and the information they generate. I’ve heard complaints that these groups won’t engage on these issues, and those complaints come from people who talk about them, and to them, in an extremely negative way.

I’m not at all surprised that these groups aren’t helpful and collaborative when they’re being insulted. These communities offer a wealth of knowledge we desperately need to solve these problems effectively and taking an adversarial stance causes them to respond in kind.

When you engage these groups as collaborators, I’ve found that they are incredibly enthusiastic to get involved and adapt their practices. And when they’re not, it’s because they see a real-world harm as a consequence of caution that the biosecurity community hasn’t recognised or valued enough.

This approach also leaves people in biosecurity reinventing the wheel when existing solutions and infrastructure could be adapted to be more biosecurity-forward. People who work with infectious diseases regularly often have a great solution in mind already, one we could be advocating for instead of trying to come up with something new ourselves, having had no experience of dealing with a real outbreak.

It can also lead to biosecurity folks developing an intervention that would fail or not be adopted in practice. There seems to be this thinking that pathogen and public health researchers are too short-sighted to care about GCBRs, or aren’t clever or focussed enough to come up with solutions to the problems we worry about. In reality, they’ve often tried, and either the intervention didn’t go as well as expected or they couldn’t get anyone to pay for it.

Expert 8: Preparing for the “Big One” looks like preparing for the smaller ones

I believe some in the effective altruism/longtermism community see a strong disconnect between “ordinary” pandemics of the kind we have recently experienced or smaller, and “existential threats.” While size and severity surely matter, the systems for responding to one are the same ones that will respond to the other — and failure to control a smaller one is the most likely way in which we get to a larger one. So investments in preparing for the “Big One” should primarily (maybe not exclusively) be those that also prepare for the small and medium-sized ones.

Expert 9: ChatGPT is not dangerous

The one that I hear most recently is that ChatGPT is going to let people be weaponeers. That drives me berserk, because telling someone step by step how to make a virus does not in any way allow them to make a weapon. It doesn’t tell them how to assemble it, how to grow it up, how to purify it, test it, deploy it, none of that.

If you lined up all the steps in a timeline of how to create a bioweapon, learning what the steps are would be this little chunk on the far left side. It would take maybe 10 hours to figure that out with Google, and maybe ChatGPT will give you that in one hour.

But there’s months and years worth of the rest of the process, and yet everyone’s running around saying, “ChatGPT is going to build a whole generation of weaponeers!” No way.
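As a back-of-the-envelope check on this argument, the sketch below compares the hours saved on the “learning the steps” chunk against the rest of the timeline. Only the 10-hour and 1-hour figures come from the expert; the roughly two-year figure for everything else is an illustrative assumption.

```python
# Back-of-the-envelope version of the expert's argument. Only the 10-hour
# and 1-hour figures come from the interview; the ~2-year "rest of the
# process" is an illustrative assumption.
HOURS_PER_YEAR = 24 * 365

learn_steps_google = 10                # hours to learn the steps via search
learn_steps_llm = 1                    # hours with a chatbot instead
rest_of_process = 2 * HOURS_PER_YEAR   # assembly, growth, purification, testing...

total_before = learn_steps_google + rest_of_process
total_after = learn_steps_llm + rest_of_process
reduction = 1 - total_after / total_before
print(f"Total timeline shortened by {reduction:.3%}")  # roughly 0.05%
```

On these assumptions, the chatbot shaves well under a tenth of a percent off the total timeline, which is the quantitative core of the expert’s scepticism.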

Another dubious take is letting information hazards rule everything: the idea that we should never do anything that creates new information hazards.

To me, that is crazy. Because you basically are saying you’re going to shut off your ability to understand the threat just because you’re worried about possibly informing someone somewhere a little bit more, despite the massive pile of public literature that already exists.

It really ends up being an argument about who’s going to be faster, the weaponeer or the person working on the defensive side. I would much rather have that contest than unilaterally disarm, which is what the information hazard argument insists we do.

I would also argue weaponisation is a years-long timeline.

We have never in human history created a virus from scratch to exploit a new vulnerability in a biological system. I can’t imagine that taking place in less than a couple of years. Over the course of those years, the individuals involved would be throwing off lots of signals that would be detectable.

And so the idea that those weaponeers would race to the finish in six months and have a little tube that they could throw around — I just don’t buy it. I see no evidence of what humans have done so far that would get us anywhere near any of that. And telling ourselves stories about how hard that is or how easy that is, I think is really harmful.

We should at least be realistic in our estimates and judgments of how hard or easy that process would be. And right now, we’re shortcutting all of that and just writing science fiction.

Expert 10: The barriers to bioweapons are not that high

There are a lot of misconceptions out there, in part because the academic community that writes about these things has been very small for a very long time. A lot of the writings that are out there on biological weapons are just wrong.

It’s hard because a lot of it is classified, but a simple example would be the claim that a country would never use a communicable biological weapon because it would blow back on their own people. The fact is that the Soviet Union perfected smallpox as a biological weapon. That they did it tells me that countries will do it.

One misconception that may once have been somewhat true — but now it’s much less true, and in the future it will be even less true — is that tacit knowledge is required to make biological weapons. That the barrier is so high.

I don’t think that’s the case. I actually don’t think that was the case 20 years ago or 30 years ago, but it’s certainly not the case today.

I think it depends on which specific biological weapon. It’s relatively easy with the information available today for a determined small group or individual to successfully produce and deliver biological weapons. There was a book written about this that studied the Soviet programme, and the author spoke to many former Soviet bioweaponeers. I think it’s natural that they would exaggerate their craft and describe how “only people with special skills like me could do this.” I think that biased the conclusions of that book.

Expert 11: We don’t need silver bullets

Some people think laws and rules, and the tools and technologies used to implement them, do not control bad actors. They only influence the behaviour of good actors. But this misses the deterrent effect of laws and rules.

Some people look only for silver bullets and they discount solutions that might be circumvented. This misses the point that raising barriers to malign misuse of technology can still be highly beneficial.

Expert 12: Three failure modes

A few misconceptions and mistakes:

  1. Failure to engage broadly with knowledge and expertise in the community, instead trying to reinvent things from scratch.

  2. Failure to account for the fact that biosecurity is exceptionally broad and other people are likely solving for different problems and have different threat models.

  3. Failure to get out into the world and talk with people — most information still isn’t written down.

Expert 13: Groupthink and failing to be scope sensitive

I think that there’s a lack of scope sensitivity. People are not treating the catastrophic potential outcomes in biosecurity as six billion times more concerning — or whatever the right multiplier would be — than the cases where one or two people could be affected.
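To illustrate what scope sensitivity means in expectation, here is a toy expected-harm comparison. The probabilities are invented for illustration; only the “six billion” and “one or two people” scales echo the expert’s framing.

```python
# Toy expected-harm comparison; the probabilities are invented purely
# to illustrate the arithmetic of scope sensitivity.
events = {
    "small incident (2 people)": (1e-2, 2),      # (probability, people affected)
    "catastrophe (6 billion)": (1e-6, 6e9),
}

for name, (probability, people) in events.items():
    print(f"{name}: expected people affected = {probability * people:,.2f}")

# Even at million-to-one odds, the catastrophe dominates in expectation:
# 1e-6 * 6e9 = 6,000 vs 1e-2 * 2 = 0.02.
```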

I also think there’s some groupthink in the effective altruism biosecurity community, and there is a very important need for people to work in and be exposed to other communities that have some very different perspectives on the issues to understand:

  • Why are these the ideas that the effective altruism community has produced?

  • Why aren’t they being implemented?

  • Why do people disagree with them?

I think that there’s a whole set of feasibility and implementation dynamics that are not well understood by many EAs in this field. I think it’s really important that people expose themselves to the organisations and the people that have been working on this for a long time, the established institutions and scientists, to better understand why people are doing things the way they do. Sometimes you may still disagree and feel a different approach would be better, but there’s usually a logic to it that is not always fully grasped.

Expert 14: COVID-19 does not equal biological weapons

One common misconception is that because SARS-CoV-2 [the virus that causes COVID-19] was able to cause this horrific human pandemic, any biological weapon would be equally devastating. That this pandemic exposed our vulnerability to biological weapons and that state and non-state actors are immediately going to exploit that vulnerability and attack with biological weapons.

I think that is misleading, because this virus evolved to be as infectious and contagious as it was. What Mother Nature is able to do with pathogens still exceeds, I think, what humans can do in this respect.

COVID-19 does not equal biological weapons, bluntly. This was a very specific pandemic, and it is not the same as anything any country has been able to do in the past or any non-state actor is able to do.

There is also an assumption that because this vulnerability was revealed, lots of states and non-state actors are going to launch biological weapons programmes.

I don’t necessarily think that’s the case. Most countries and terrorist groups don’t have or want or need biological weapons because they will not serve their purposes. We should not expect this dramatic change in calculus based on this one natural pandemic. We’ve had other pandemics historically, lots of flu pandemics that have not resulted in wholesale changes in the way that states or terrorist groups view these things.

The reality is that biological weapons just don’t work that well and aren’t going to meet the objectives of countries and groups that are looking for new capabilities.

There are lots of misconceptions about how easy it is to develop and use a biological weapon. There are a lot of steps that go into converting a pathogen into an actual weapon that is capable of causing mass casualties, and that process is not necessarily going to be published in the open literature.

It’s not something that people can just learn in grad school, right? Very arcane knowledge and skills go into this that are just not readily accessible. So there’s a very common downplaying of the role of tacit knowledge in this process of weaponisation. It’s very easy for people to claim that if you can find a pathogen in nature and culture it, then you can turn it into a weapon to cause a pandemic or mass casualties.

And that’s just not the case. We’ve seen nation states struggle to develop these weapons and non-state actors have been very unsuccessful with them because they are very challenging to acquire and develop and produce and use properly.

I guess the last misconception I’ll highlight is that advances in life sciences and technology are democratising the proliferation of biological weapons — that these advances are easily converted into misuse.

And again, that’s just not the case. The technologies that are developed are usually developed for very specific civilian scientific purposes that would need to be modified to be capable of causing mass casualties. That’s not a simple, straightforward, easy process by any means.

People overestimate the degree of threats to biosecurity. Again, I’m not saying that there aren’t any, but the tendency is more to exaggerate the threats than not.

And frequently this comes from people who don’t understand the biology of these organisms, and they come out of different epistemic communities. They don’t understand the nuances and the complexities involved in what it takes to actually create a biological threat.

It’s safe to say that the knowledge and capabilities are increasing because of advances in dual-use technologies. But that’s very much a latent capability that’s increasing. It’s another thing to say that this latent capability will transform into something capable of misuse; that just assumes there’s someone out there with the intent and motivation to do so.

That latent capability for modifying or producing pathogens is clearly expanding, in terms of both our level of knowledge and the number of people out there who are able to do it. But it doesn’t necessarily mean that the intent is growing. And that’s the part that people lose sight of and don’t pay attention to.

If someone is going to want to misuse biology, there has to be this intent or motivation. And that’s the part that people just kind of take for granted.