IIRC, Toby Ord’s estimates of the risk of human extinction in The Precipice basically come entirely from AI, and everything else is a rounding error. Since then, AI has only become more pressing. I think it is probably fair to say that “AI is the most pressing x-risk” is a dominant view.
I don’t think we should defer too much to Ord’s x-risk estimates, but since we’re talking about them, here goes:
Ord’s estimate of total natural risk is 1 in 10,000, which is roughly 1,700 times less than the total anthropogenic risk (1 in 6).
Risk from engineered pandemics (1 in 30) is within an order of magnitude of risk from misaligned AI (1 in 10), so it’s hardly a rounding error (although simeon_c recently argued that Ord “vastly overestimates” biorisk).
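For concreteness, here’s a quick sanity check of those ratios (a minimal Python sketch; all figures are Ord’s point estimates over the next century, as quoted above):

```python
# Sanity check of the ratios above, using Ord's point estimates
# from The Precipice (existential risk over the next 100 years).
natural = 1 / 10_000   # total natural risk
anthro = 1 / 6         # total anthropogenic risk
ai = 1 / 10            # unaligned AI
pandemic = 1 / 30      # engineered pandemics

print(anthro / natural)  # ~1667: natural risk really is a rounding error
print(ai / pandemic)     # 3.0: biorisk is within an order of magnitude of AI
print(ai / anthro)       # 0.6: AI is ~60% of the total estimated risk
```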
No, you’re probably thinking of anthropogenic risk. AI is 1 in 10, whereas the total estimated x-risk is 1 in 6.
Ah yes, that’s right. Still, AI contributes the majority of the total x-risk: 1 in 10 out of 1 in 6 is 60%.