Great points, and thanks for the reading suggestions, Ben! I am also happy to know you plan to publish a report describing your findings.
I qualitatively agree with everything you have said. However, I would like to see a detailed quantitative model estimating AI or bio extinction risk (one which handles infohazards well). Otherwise, I am left wondering about how much higher extinction risk will become accounting not only for increased capabilities, but also for increased safety.
On a meta-level, the fact that XPT superforecasters are so much higher than what your model outputs suggests that they also think the right reference class approach is OOMs higher. And this is despite my suspicion that the XPT supers are too low and too indexed on past base-rates.
To clarify, my best guess is also many OOMs higher than the headline number of my post. I think XPT's superforecaster prediction of 0.01 % human extinction risk due to an engineered pathogen by 2100 (Table 3) is reasonable.
However, I wonder whether superforecasters are overestimating the risk because their nuclear extinction risk by 2100 of 0.074 % seems way too high. I estimated a 0.130 % chance of a nuclear war before 2050 leading to an injection of soot into the stratosphere of at least 47 Tg, so around 0.39 % (= 0.00130*75/25) before 2100. So, for the superforecasters to be right, extinction conditional on at least 47 Tg would have to be around 20 % (= 0.074/0.39) likely. This appears extremely pessimistic. From Xia 2022 (see top tick in the 3rd bar from the right in Fig. 5a):
With the most optimistic case – 100% livestock crop feed to humans, no household waste and equitable global food distribution – there would be enough food production for everyone under the 47 Tg case.
This scenario is the most optimistic in Xia 2022, but it is pessimistic in a number of ways (search for "High:" here):
"Scenarios assume that all stored food is consumed in Year 1", i.e. no rationing.
"We do not consider farm-management adaptations such as changes in cultivar selection, switching to more cold-tolerating crops or greenhouses31 and alternative food sources such as mushrooms, seaweed, methane single cell protein, insects32, hydrogen single cell protein33 and cellulosic sugar34".
"Large-scale use of alternative foods, requiring little-to-no light to grow in a cold environment38, has not been considered but could be a lifesaving source of emergency food if such production systems were operational".
"Byproducts of biofuel have been added to livestock feed and waste27. Therefore, we add only the calories from the final product of biofuel in our calculations". However, it would have been better to redirect the crops used to produce biofuels to humans.
So a 20 % chance of extinction conditional on at least 47 Tg does sound very high to me, which makes me think superforecasters are overestimating nuclear extinction risk quite a lot. This in turn makes me wonder whether they are also overestimating other risks which I have investigated less.
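For concreteness, here is a minimal sketch of the back-of-envelope arithmetic above, assuming a simplistic linear extrapolation of my 25-year estimate to the 75 years up to 2100 (the variable names are only illustrative):

```python
# Implied extinction probability conditional on >= 47 Tg of soot, assuming the
# superforecasters' 0.074 % nuclear extinction risk by 2100 all runs through such a war.

p_war_47tg_by_2050 = 0.00130                        # my estimate of a >= 47 Tg war before 2050
p_war_47tg_by_2100 = p_war_47tg_by_2050 * 75 / 25   # linear extrapolation from 25 to 75 years

p_extinction_by_2100 = 0.00074                      # XPT superforecasters' nuclear extinction risk

implied_conditional = p_extinction_by_2100 / p_war_47tg_by_2100

print(f"P(war with >= 47 Tg by 2100) ~ {p_war_47tg_by_2100:.2%}")       # ~0.39 %
print(f"Implied P(extinction | >= 47 Tg) ~ {implied_conditional:.0%}")  # ~19 %, i.e. around 20 %
```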
So overall, compared to the threat model of future bio x-risk, I think the empirical track record of terrorism is too weak (point 1)
Nitpick: I think you meant bioterrorism, not terrorism, which includes more data.
Thanks! Fixed.
I don't know the nuclear field well, so don't have much to add. If I'm following your comment though, it seems like you have your own estimate of the chance of nuclear war raising 47+ Tg of soot, and on the basis of that infer the implied probability supers give to extinction conditional on such a war. Why not instead infer that supers have a higher forecast of nuclear war than your 0.39% by 2100? E.g. a ~1.6% chance of nuclear war with 47+ Tg and a 5% chance of extinction conditional on it. I may be misunderstanding your comment. Though to be clear, I think it's very possible the supers were not thinking things through in similar detail to you; there were a fair number of questions in the XPT.
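To illustrate the point numerically, both decompositions land in the same ballpark as the supers' 0.074 %; the 1.6 % and 5 % figures are only example numbers from the comment above, not anyone's considered forecast:

```python
# P(extinction) = P(war with >= 47 Tg) * P(extinction | such a war)

decomposition_a = 0.0039 * 0.19   # the ~0.39 % war probability above with the implied ~19 % conditional risk
decomposition_b = 0.016 * 0.05    # example: a higher ~1.6 % war probability with a ~5 % conditional risk

print(f"{decomposition_a:.3%}")   # ~0.074 %
print(f"{decomposition_b:.3%}")   # ~0.080 %
```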
I am left wondering about how much higher extinction risk will become accounting not only for increased capabilities, but also for increased safety
I don't think I follow this sentence? Is it that one might expect future advances in defensive biotech/other tech to counterbalance offensive tech development, and that without a detailed quant model you expect the defensive side to be under-counted?
Why not instead infer that supers have a higher forecast of nuclear war than your 0.39% by 2100? E.g. a ~1.6% chance of nuclear war with 47+ Tg and a 5% chance of extinction conditional on it.
Fair point! Here is another way of putting my point. I estimated a probability of 3.29*10^-6 for a 50 % population loss due to the climatic effects of nuclear war before 2050, so around 0.001 % (= 3.29*10^-6*75/25) before 2100. Superforecasters' 0.074 % nuclear extinction risk before 2100 is 74 times my risk for a 50 % population loss due to climatic effects. My estimate may be off to some extent, and I only focussed on the climatic effects, not the indirect deaths caused by infrastructure destruction, but my best guess would have to be many OOMs off for the superforecasters' prediction to be in the right OOM. This makes me believe superforecasters are overestimating nuclear extinction risk.
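Making the comparison explicit, again under the simplistic assumption of a linear extrapolation from 25 to 75 years:

```python
# Comparing my estimate for a 50 % population loss with the supers' extinction risk.

p_50pct_loss_by_2050 = 3.29e-6                          # my estimate, climatic effects only, before 2050
p_50pct_loss_by_2100 = p_50pct_loss_by_2050 * 75 / 25   # linear extrapolation to 2100, ~0.001 %

p_extinction_by_2100 = 0.00074                          # XPT superforecasters' nuclear extinction risk

ratio = p_extinction_by_2100 / p_50pct_loss_by_2100

print(f"P(50 % population loss by 2100) ~ {p_50pct_loss_by_2100:.4%}")  # ~0.0010 %
print(f"Ratio of extinction risk to this ~ {ratio:.0f}")                # ~75 (74 with the rounded 0.001 %)
```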
Is it that one might expect future advances in defensive biotech/other tech to counterbalance offensive tech development [?]
Yes, in the same way that the risk of global warming is often overestimated due to neglecting adaptation.
without a detailed quant model you expect the defensive side to be under-counted?
I expect the defensive side to be under-counted, but not necessarily due to lack of quantitative models. However, I think using quantitative models makes it less likely that the defensive side is under-counted. I have not thought much about this; I am just expressing my intuitions.