Thanks Peter!
I wonder if you’d be willing to be a bit more vocal about this. For example, the second most upvoted comment (27 karma right now) takes me to task for saying that “most experts are deeply skeptical of Ord’s claim” (1/30 existential biorisk in the next 100 years).
I take that to be uncontroversial. Would you be willing to say so?
David, as someone who’s generally a big fan of your work, I think it’s on you to provide evidence that most experts are ‘deeply skeptical’ of Ord’s claim. And here’s the thing: you might not even be wrong about it! But you describe your confidence in this claim as ‘uncontroversial’, and yet the evidence you provide does not match that level of confidence. I find it strange and disappointing that you don’t address this, given that a common theme on your blog is that EAs often make overconfident claims.
For example, in Part 10 of ‘Exaggerating the Risks’, your evidence for the claim about ‘most experts’ is only:
“A group of health researchers from King’s College”
“an expert panel on risks posed by bioweapons” convened by one of the above researchers
“David Sarpong and colleagues”
Which you then use to conclude: “Experts widely believe that existential biorisk in this century is quite low. The rest of us could do worse than to follow their example.” But you haven’t argued for this. What’s the numerator and denominator here? How can you be so certain without calculating the proportion? What does ‘widely believe’ mean? Doesn’t Ord also think existential biorisk is ‘quite low’? A 3.33% chance (1/30) reads as ‘quite low’ to me; perhaps you mean ‘exceedingly low’ or ‘a vanishingly small chance’ instead?
Then, in Part 11, you appeal to the fact that in the recent XPT study superforecasters reduced their median estimate of existential risk from bio from 0.1% to 0.01%, but you don’t mention that in the same study[1] domain experts increased their estimate on the same question from 0.7% to 1%. So when the “experts” in this study don’t match your viewpoint, you mention only the non-experts and decline to note that expert consensus moved in the opposite direction to the one your case expects. And even then, a 1% vs 3.33% difference in subjective risk estimates doesn’t sound to me like a gap that merits describing the former as ‘deeply sceptical’ of the latter.
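To make the comparison concrete, here is a quick back-of-the-envelope sketch of the figures quoted above (Ord’s 1/30, the XPT domain experts’ post-tournament 1%, and the superforecasters’ 0.01%); the variable names are just illustrative labels:

```python
# Figures as quoted in the discussion, converted to probabilities.
ord_estimate = 1 / 30                # ~3.33%, Ord's existential biorisk estimate
xpt_experts = 0.01                   # 1%, XPT domain experts (post-tournament median)
xpt_superforecasters = 0.0001        # 0.01%, XPT superforecasters (post-tournament median)

# Fold-differences between the estimates.
print(f"Ord vs XPT experts: {ord_estimate / xpt_experts:.1f}x")
print(f"Ord vs superforecasters: {ord_estimate / xpt_superforecasters:.0f}x")
```

On these numbers, Ord’s estimate is only about 3x the domain experts’ figure, versus roughly 300x the superforecasters’ figure, which is why which group you call ‘the experts’ does so much work in the argument.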
I like your work, and I think that you successfully ‘kicked the tires’ on the Aum Shinrikyo case presented in The Precipice, for example. But you conclude this mini-series in Part 11 by saying this:
“But experts are largely unconvinced that there is a serious risk of large-scale biological attacks, particularly on a scale that could lead to existential catastrophe.”
But it also turns out, from what I can tell, that most EAs don’t think so either! So maybe you’re just going after Ord here, but again, a ~1% vs 3.33% risk estimate doesn’t seem as big a difference as you claim. And I don’t think that’s all you’re restricting your claims to, since you also mention ‘many leading effective altruists’ and use this to push back on what you see as EAs’ mishandling of ‘expert’ evidence, for example. But much like your critiques of EA x-risk work, I think you continually fail to produce arguments, or at least good arguments, for this particular claim that can justify the strength of your position.
[1] Page 66 of the PDF.
I asked because I’m interested—what makes you think most experts don’t think biorisk is such a big threat, beyond a couple of papers?