Hi Vasco, nice post, thanks for writing it! I haven’t had the time to look into all your details, so these are some thoughts written quickly.
I worked on a project for Open Phil quantifying the likely number of terrorist groups pursuing bioweapons over the next 30 years, but didn’t look specifically at attack magnitudes (I appreciate the push to get a public-facing version of the report published—I’m on it!). That work was as an independent contractor for OP, but I now work for them on the GCR Cause Prio team. All that to say these are my own views, not OP’s.
I think this is a great post grappling with the empirics of terrorism. And I agree with the claim that the history of terrorism implies an extinction-level terrorist attack is unlikely. However, for similar reasons to Jeff Kaufman, I don’t think this strongly undermines the existential threat from non-state actors. This is for three reasons, one methodological and two qualitative:
The track record of bioterrorism in particular is too sparse to make empirical projections with much confidence. I think the rarity of bioterror and the generally small magnitudes of attacks justify a prior against bioterror as a significant threat, but only a weak one. They also justify a prior that we should expect at least a handful of groups to attempt bioterror over the next 10-30 years. To take the broader set of terror attacks as having strong implications for future bioterror, one would need to think that ‘terrorism’ is a compelling reference class for bio x-risk—which my next two points dispute.
The vast majority of terror groups (‘violent non-state actors’ is a more generally applicable handle) would not want to cause extinction. Omnicidality is a fairly rare motivation—most groups have specific political aims, or ideological motivations that are predicated on a particular people/country/sect/whatever thriving and overcoming its enemies. Aiming for civilisational collapse is slightly more prevalent, though still uncommon. And for all of history, there hasn’t been a viable path to omnicide or x-risk anyway. So the kind of actor that presents a bio x-risk is probably going to be very different to the kinds of actor that make up the track record of terrorism.
The vast majority of terror attacks are kinetic—involving explosives, firearms, vehicles, melee weapons. The exceptions are chemical and biological weapons. The biological weapons chosen are generally non-transmissible—anthrax, botulinum toxin, ricin, etc. This means that chem and bio attacks also rely on delivery mechanisms that have to get each individual victim to come into contact with the agent. An attack with a pandemic-class agent would not rely on such delivery. It would be strikingly different in complexity of development, attack modality, targeting specificity, and many other dimensions. I.e. it would be very unlike almost all previous terrorist attacks. The ability to carry out such an attack is also fairly unprecedented—it may only emerge with subsequent developments in biotechnology, especially from the convergence of AI and biotechnology.
So overall, compared to the threat model of future bio x-risk, I think the empirical track record of bioterrorism is too weak (point 1), and the broader terrorism track record is based on actors with very different motivations (point 2) using very different attack modalities (point 3). The latter two points are grounded in a particular worldview—that within coming years/decades biotechnology will enable biological weapons with catastrophic potential. I think that worldview is certainly contestable, but I think the track record of terrorism is not the most fruitful line of attack against it.
On a meta-level, the fact that XPT superforecasters are so much higher than what your model outputs suggests that they also think the right reference class approach is OOMs higher. And this is despite my suspicion that the XPT supers are too low and too indexed on past base-rates.
You emailed asking for reading recommendations—in lieu of my actual report (which will take some time to get to a publishable state), here’s my structured bibliography! In particular I’d recommend Binder & Ackermann 2023 (CBRN Terrorism) and McCann 2021 (Outbreak: A Comprehensive Analysis of Biological Terrorism).
Hi Ben, I’m curious if this public-facing report is out yet, and if not, where could someone reading this in the future look to check (so you don’t have to repeatedly field the same question)?
> I appreciate the push to get a public-facing version of the report published—I’m on it!
Great points, and thanks for the reading suggestions, Ben! I am also happy to know you plan to publish a report describing your findings.
I qualitatively agree with everything you have said. However, I would like to see a detailed quantitative model estimating AI or bio extinction risk (one which handles infohazards well). Otherwise, I am left wondering how much higher extinction risk will become, accounting not only for increased capabilities, but also for increased safety.
> On a meta-level, the fact that XPT superforecasters are so much higher than what your model outputs suggests that they also think the right reference class approach is OOMs higher. And this is despite my suspicion that the XPT supers are too low and too indexed on past base-rates.
To clarify, my best guess is also many OOMs higher than the headline number of my post. I think XPT’s superforecaster prediction of 0.01 % human extinction risk due to an engineered pathogen by 2100 (Table 3) is reasonable.
However, I wonder whether superforecasters are overestimating the risk, because their nuclear extinction risk by 2100 of 0.074 % seems way too high. I estimated a 0.130 % chance of a nuclear war before 2050 leading to an injection of soot into the stratosphere of at least 47 Tg, so around 0.39 % (= 0.00130*75/25) before 2100. So, for the superforecasters to be right, the probability of extinction conditional on at least 47 Tg would have to be around 20 % (= 0.074/0.39). This appears extremely pessimistic. From Xia 2022 (see top tick in the 3rd bar from the right in Fig. 5a):
> With the most optimistic case—100% livestock crop feed to humans, no household waste and equitable global food distribution—there would be enough food production for everyone under the 47 Tg case.
This scenario is the most optimistic in Xia 2022, but it is pessimistic in a number of ways (search for “High:” here):
- “Scenarios assume that all stored food is consumed in Year 1”, i.e. no rationing.
- “We do not consider farm-management adaptations such as changes in cultivar selection, switching to more cold-tolerating crops or greenhouses31 and alternative food sources such as mushrooms, seaweed, methane single cell protein, insects32, hydrogen single cell protein33 and cellulosic sugar34”.
- “Large-scale use of alternative foods, requiring little-to-no light to grow in a cold environment38, has not been considered but could be a lifesaving source of emergency food if such production systems were operational”.
- “Byproducts of biofuel have been added to livestock feed and waste27. Therefore, we add only the calories from the final product of biofuel in our calculations”. However, it would have been better to redirect the crops used to produce biofuels to humans.
So 20 % chance of extinction conditional on at least 47 Tg does sound very high to me, which makes me think superforecasters are overestimating nuclear extinction risk quite a lot. This in turn makes me wonder whether they are also overestimating other risks which I have investigated less.
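For concreteness, here is a minimal sketch of the arithmetic behind the 20 % figure (the 0.130 % estimate is my own, and the linear scaling from 25 to 75 years is a simplifying assumption):

```python
# Implied P(extinction | >= 47 Tg of soot), assuming the XPT superforecasters'
# 0.074 % nuclear extinction risk by 2100 and my own estimate of the chance of
# a nuclear war injecting >= 47 Tg of soot into the stratosphere.
p_war_47tg_by_2050 = 0.00130                       # my estimate, before 2050
p_war_47tg_by_2100 = p_war_47tg_by_2050 * 75 / 25  # crude linear scaling to 2100
p_supers_extinction_by_2100 = 0.00074              # XPT superforecasters' nuclear extinction risk

implied_conditional = p_supers_extinction_by_2100 / p_war_47tg_by_2100
print(f"P(>= 47 Tg war before 2100) ~ {p_war_47tg_by_2100:.2%}")        # ~0.39 %
print(f"Implied P(extinction | >= 47 Tg) ~ {implied_conditional:.0%}")  # ~19 %, i.e. around 20 %
```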
> So overall, compared to the threat model of future bio x-risk, I think the empirical track record of terrorism is too weak (point 1)
Nitpick. I think you meant bioterrorism, not terrorism, which includes more data.
Thanks! Fixed.
I don’t know the nuclear field well, so don’t have much to add. If I’m following your comment though, it seems like you have your own estimate of the chance of a nuclear war injecting 47+ Tg of soot into the stratosphere, and on the basis of that you infer the implied probability supers give to extinction conditional on such a war. Why not instead infer that supers have a higher forecast of nuclear war than your 0.39% by 2100? E.g. a ~1.6% chance of nuclear war with 47+ Tg and a 5% chance of extinction conditional on it. I may be misunderstanding your comment. Though to be clear, I think it’s very possible the supers were not thinking things through in similar detail to you—there were a fair number of questions in the XPT.
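As a quick sketch of that alternative decomposition (the 1.6 % and 5 % figures are just illustrative values chosen to roughly match the supers’ 0.074 %):

```python
# A hypothetical decomposition consistent with the supers' 0.074 % figure.
p_war_47tg_by_2100 = 0.016   # higher forecast of a >= 47 Tg nuclear war by 2100
p_ext_given_war = 0.05       # extinction conditional on such a war
print(f"{p_war_47tg_by_2100 * p_ext_given_war:.3%}")  # 0.080 %, close to 0.074 %
```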
> I am left wondering how much higher extinction risk will become, accounting not only for increased capabilities, but also for increased safety
I don’t think I follow this sentence? Is it that one might expect future advances in defensive biotech/other tech to counterbalance offensive tech development, and that without a detailed quant model you expect the defensive side to be under-counted?
> Why not instead infer that supers have a higher forecast of nuclear war than your 0.39% by 2100? E.g. a ~1.6% chance of nuclear war with 47+ Tg and a 5% chance of extinction conditional on it.
Fair point! Here is another way of putting it. I estimated a probability of 3.29*10^-6 for a 50 % population loss due to the climatic effects of nuclear war before 2050, so around 0.001 % (= 3.29*10^-6*75/25) before 2100. Superforecasters’ 0.074 % nuclear extinction risk before 2100 is 74 times my risk of a 50 % population loss due to climatic effects. My estimate may be off to some extent, and I only focussed on the climatic effects, not the indirect deaths caused by infrastructure destruction, but my best guess would have to be many OOMs off for the superforecasters’ prediction to be in the right OOM. This makes me believe superforecasters are overestimating nuclear extinction risk.
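Spelling that comparison out (the 3.29*10^-6 figure is my own estimate, and the 75/25 scaling is again a simplification):

```python
# Comparing the supers' nuclear extinction risk with my estimate of a 50 %
# population loss from the climatic effects of nuclear war.
p_50pct_loss_by_2050 = 3.29e-6                         # my estimate, before 2050
p_50pct_loss_by_2100 = p_50pct_loss_by_2050 * 75 / 25  # ~0.001 %
p_supers_extinction_by_2100 = 0.00074                  # XPT superforecasters, before 2100

print(f"P(50 % loss before 2100) ~ {p_50pct_loss_by_2100:.4%}")             # ~0.0010 %
ratio = p_supers_extinction_by_2100 / p_50pct_loss_by_2100
print(f"Ratio ~ {ratio:.0f}")  # ~75 (74 in the text, from rounding to 0.001 %)
```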
> Is it that one might expect future advances in defensive biotech/other tech to counterbalance offensive tech development [?]
Yes, in the same way that the risk of global warming is often overestimated due to neglecting adaptation.
> without a detailed quant model you expect the defensive side to be under-counted?
I expect the defensive side to be under-counted, but not necessarily due to lack of quantitative models. However, I think using quantitative models makes it less likely that the defensive side is under-counted. I have not thought much about this; I am just expressing my intuitions.