People in bunkers, “sardines” and why biorisks may be overrated as a global priority

Hi! This is my first post on the EA Forum, though I've been a fan of Effective Altruism and following it for a few years, and I'm currently going through the 80,000 Hours career planning course.

I originally posted this as a Shortform because I couldn't find the option to create a regular post. I've since found it, so I'm reposting here in case that helps more people see it and share comments/feedback. When I posted it as a Shortform I got a thoughtful comment from Linch about information hazards. I didn't realize there was any kind of taboo or concern about discussing biorisks when I wrote this, so apologies if this violates any community norms, and let me know if it's serious enough to warrant taking this down.

I'm going to make the case here that certain problem areas currently prioritized highly in the longtermist EA community are overweighted in their importance/scale. In particular I'll focus on biorisks, but this could also apply to other risks such as non-nuclear global war and perhaps other areas as well.

I'll focus on biorisks because they are currently highly prioritized by both Open Philanthropy and 80,000 Hours, and probably by other EA groups as well. If I'm right that biotechnology risks should be deprioritized, that would significantly increase the relative priority of other issues like AI, growing Effective Altruism, global priorities research, and nanotechnology risks. So it could help allocate more resources to the areas that still pose existential threats to humanity.

I won't be taking issue with the longtermist worldview here. In fact, I'll assume the longtermist worldview is correct. Rather, I'm questioning whether biorisks really pose a significant existential/extinction risk to humanity. I don't doubt that they could lead to major global catastrophes which it would be really good to avert. I just think that it's extremely unlikely for them to lead to total human extinction or permanent civilizational collapse.

This started when I was reading about disaster shelters. Nick Beckstead has a paper considering whether they could be a useful avenue for mitigating existential risks [1]. He concludes there could be a couple of special scenarios where they would be, and that these need further research, but that by and large new refuges don't seem like a great investment, because there are already so many existing shelters and other things that could protect people from many global catastrophes. Specifically, the world already has a lot of government bunkers, private shelters, people working on submarines, and 100-200 uncontacted peoples, which are likely to produce survivors of certain otherwise devastating events. [1]

A highly lethal engineered pandemic is among the biggest risks considered from biotechnology. It could potentially wipe out billions of people and lead to a collapse of civilization. But it would almost certainly spare at least a few hundred or a few thousand people among those with access to existing bunkers or other disaster shelters, people working on submarines, and the dozens of tribes and other peoples living in remote isolation. Repopulating the Earth and rebuilding civilization would not be fast or easy, but these survivors could probably do it over many generations.
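To give a rough sense of what "many generations" might mean, here's a back-of-envelope sketch in Python. The survivor count and growth rates are assumptions I've picked purely for illustration (loosely in the range of historical long-run growth), not estimates from Beckstead's paper or anywhere else:

```python
import math

def years_to_repopulate(survivors, target, annual_growth):
    """Years of steady exponential growth to get from `survivors` to `target` people."""
    return math.log(target / survivors) / math.log(1 + annual_growth)

# Assumed for illustration only: 500 survivors, long-run growth rates
# loosely comparable to pre-industrial and early-modern history.
for rate in (0.005, 0.01, 0.02):
    years = years_to_repopulate(survivors=500, target=1_000_000_000, annual_growth=rate)
    print(f"growth {rate:.1%}/yr -> roughly {years:,.0f} years to reach 1 billion people")
```

Under these toy assumptions the answer comes out somewhere between several hundred and a few thousand years: agonizingly slow on ordinary human timescales, but a blink from a longtermist perspective.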

So are humans immune, then, to all existential risks thanks to preppers, "sardines" [2] and uncontacted peoples? No. There are certain globally catastrophic events which would likely spare no one. A superintelligent malevolent AI could probably hunt everyone down. The feared nanotechnological "gray goo" scenario could wreck all matter on the planet. A nuclear war extreme enough to contaminate all land on the planet with radioactivity would likely have immediate survivors, but it might create such a mess that no humans would last long-term. There are probably others as well.

I've gone out on a bit of a limb here to claim that biorisks aren't an existential risk. I'm not a biotech expert, so there could be some biorisks that I'm not aware of. For example, could there be some kind of engineered virus that contaminates all food sources on the planet? I don't know and would be interested to hear from folks about that. This would be similar to long-lasting global nuclear fallout in that it would have immediate survivors but not long-term survivors. However, the biorisks I have mostly seen people focus on seem to be lethal, virulent engineered pandemics that target humans. As I've said, it seems unlikely this would kill all the humans in bunkers/shelters, on submarines and in remote parts of the planet.

Even if there is some kind of lesser-known biotech risk which could be existential, my bottom-line claim is that there seems to be an important line between true existential risks that would annihilate all humans and near-existential risks that would spare some people in disaster shelters and shelter-like situations. I haven't seen this line discussed much, and I think drawing it could help the EA community better prioritize global problem areas.

--

[1]: "How much could refuges help us recover from a global catastrophe?" https://web.archive.org/web/20181231185118/https://www.fhi.ox.ac.uk/wp-content/uploads/1-s2.0-S0016328714001888-main.pdf

[2]: I just learned that sailors use this term for submariners, which is pretty fun. https://www.operationmilitarykids.org/what-is-a-navy-squid-11-slang-nicknames-for-navy-sailors/