David, as someone who's generally a big fan of your work, it's kind of on you to provide evidence that most experts are "deeply skeptical" of Ord's claim. And here's the thing, you might not even be wrong about it! But you present this claim with a level of confidence befitting something "uncontroversial", and yet the evidence you provide does not match it. I find it strange/disappointing that you don't address this, given that it's a common theme on your blog that EAs often make overconfident claims.
For example, in Part 10 of "Exaggerating the risks" your evidence for the claim about "most experts" is only:
"A group of health researchers from King's College"
"an expert panel on risks posed by bioweapons" convened by one of the above researchers
"David Sarpong and colleagues"
Which you then use to conclude: "Experts widely believe that existential biorisk in this century is quite low. The rest of us could do worse than to follow their example." But you haven't argued for this. What's the numerator and denominator here? How are you so certain without calculating the proportion? What does "widely believe" mean? Doesn't Ord also think existential biorisk is "quite low"? 3.33% makes sense as "quite low" to me; maybe you mean "exceedingly low"/"vanishingly small chance" or something like that instead?
Then, in part 11, you appeal to how in the recent XPT study superforecasters reduced their median estimate of existential risk from bio from 0.1% to 0.01%, but you don't mention that in the same study[1] domain experts increased their estimate on the same question from 0.7% to 1%. So in this study, when the "experts" don't match your viewpoint, you suddenly mention only the non-experts and decline to mention that the expert consensus moved in the opposite direction to the one your case expects. And even then, a 1% vs 3.33% difference in subjective risk estimation doesn't sound to me like a gap that merits describing the former as "deeply sceptical" of the latter.
I like your work, and I think that you successfully "kicked the tires" on the Aum Shinrikyo case presented in The Precipice, for example. But you conclude this mini-series in part 11 by saying this:
"But experts are largely unconvinced that there is a serious risk of large-scale biological attacks, particularly on a scale that could lead to existential catastrophe."
But it also turns out, from what I can tell, that most EAs don't think so either! So maybe you're just going after Ord here, but then again I think a ~1% vs 3.33% risk estimate doesn't seem as big a difference as you claim. But I don't think that's what you're restricting your claims to, since you also mention "many leading effective altruists" and use this to push back on your perceived issue with how EAs treat "expert" evidence, for example. But much like your critiques of EA x-risk work, I think you continually fail to produce arguments, or at least good arguments, for this particular claim that can justify the strength of your position.
[1] Page 66 of the PDF