I think, for basically Richard Ngo's reasons, I weakly disagree with a strong version of Tobias's original claim that:
if engineered pandemics, or "unforeseen" and "other" anthropogenic risks have a chance of 3% each of causing extinction, wouldn't you expect to see smaller versions of these risks (that kill, say, 10% of people, but don't result in extinction) much more frequently? But we don't observe that.
(Whether/how much I disagree depends in part on what "much more frequently" is meant to imply.)
I also might agree with:
I think a reasonable prior on this sort of thing would have killing 10% of people not much more likely than killing 100% of people
(Again, depends in part on what "much more likely" would mean.)
But I was very surprised to read:
[...] and actually IMO it should be substantially less likely.
And I mostly agree with Tobias's response to that.
The point that there's a narrower band of asteroid sizes that would kill ~10% of the population than of sizes that would kill 100% makes sense. But I believe there are also many more asteroids in that narrow band. E.g., Ord writes:
While an impact with an asteroid in this smaller size range [0-10km across] would be much less likely to cause an existential catastrophe [than an impact from one that's greater than 10km across], this may be more than offset by their much higher probability of impact.
[...] the probability of an Earth impact in an average century is about one in 6,000 for asteroids between one and ten kilometres in size, and about one in 1.5 million for those above ten kilometres.
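To make the relative frequencies concrete, here's a quick back-of-the-envelope check using just the two figures Ord quotes above (the code itself is mine and purely illustrative):

```python
# Rough arithmetic on Ord's per-century asteroid impact probabilities.
# The two probabilities are the figures quoted above; everything else is illustrative.

p_1_to_10_km = 1 / 6_000        # chance per century of an impact by a 1-10 km asteroid
p_over_10_km = 1 / 1_500_000    # chance per century of an impact by a >10 km asteroid

ratio = p_1_to_10_km / p_over_10_km
print(f"1-10 km impacts are ~{ratio:.0f}x more likely per century than >10 km impacts")
# -> ~250x, which is the "much higher probability of impact" Ord refers to
```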
And for pandemics it seems clearly far more likely to have one that kills a massive number of people than one that kills everyone. Indeed, my impression is that most x-risk researchers think it'd be notably more likely for a pandemic to cause existential catastrophe through something like causing collapse or reduction in population to below minimum viable population levels, rather than by "directly" killing everyone. (I'm not certain of that, though.)
I'd guess, without much basis, that:
the distribution of impacts would vary between the different sources of risk
For pandemics, killing (let's say) 5-25% really is much more likely than killing 100%. (This isn't why I make this guess, but it seems worth noting: Metaculus currently predicts a 6% chance COVID-19 kills >100 million people by the end of the year, which would put it close to that 5-25% range. I really don't know how much stock to put in that.)
For asteroids, it appears based on Ord that the same is true ("smaller" catastrophes much likelier than complete extinction events).
For AI, the Bostrom/Yudkowsky-style scenario might actually be more likely to kill 100% than 5-25%, or at least not much less likely. But issues involving less "discontinuous" development, some misuse by humans rather than a "treacherous turn", multiple AI systems at similar levels, etc., might be much more likely to kill 5-25% rather than 100%. (I haven't thought about this much.)
You could have made the exact same argument in 1917, in 1944, etc. and you would have been wildly wrong.
I guess I'd have to see the details of what the modelling would've looked like, but it seems plausible to me that a model in which 10% events are much less likely than 1% events, which are in turn much less likely than 0.1% events, would've led to good predictions. And cherry-picking two years where that would've failed doesn't mean the predictions would've been foolish ex ante.
E.g., if the model said something as bad as the Spanish Flu would happen every two centuries, then predicting a 0.5% (or whatever) chance of it in 1917 would've made sense. And if you looked at that prediction alongside the surrounding 199 predictions, the predictor might indeed seem well-calibrated. (These numbers are made up for the example only.)
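A minimal sketch of that calibration point, assuming (as in the made-up example above) a true rate of one Spanish-Flu-scale event per two centuries; the simulation is my own toy illustration, not anything from the modelling being discussed:

```python
# Toy simulation: a forecaster assigns 0.5% per year to a Spanish-Flu-scale pandemic,
# and the true annual probability really is 1/200. Over many years the forecast is
# well calibrated, even though in the occasional year (e.g. 1918) the event occurs.

import random

random.seed(0)
P_ANNUAL = 1 / 200          # assumed true annual probability (0.5%)
N_YEARS = 200_000           # simulate many 200-year blocks' worth of years

events = sum(random.random() < P_ANNUAL for _ in range(N_YEARS))
print(f"Observed frequency: {events / N_YEARS:.4f} vs forecast {P_ANNUAL:.4f}")
# The 0.5% forecast looks "wildly wrong" in hindsight for the event years themselves,
# but across the surrounding ~200 predictions it matches the observed frequency.
```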
(Also, the Defence in Depth paper seems somewhat relevant to this matter, and is great in any case.)
Yeah I take back what I said about it being substantially less likely, that seems wrong.