Nitpicking here, but I do not believe that AI is the most pressing problem, as opposed to a pressing one:
"It's pretty much generally agreed upon in the EA community that the development of unaligned AGI is the most pressing problem"
Added 2022-08-09: The original claim was that AGI is the most pressing problem from a longtermist point of view, so I've edited this comment to clarify that I mean problem, not x-risk. To prove that AGI is the most pressing problem, one needs to prove that it's more cost-effective to work on AGI safety than to work on any other x-risk and any broad intervention to improve the future. (For clarity, a "pressing" problem is one that's cost-effective to allocate resources to at current margins.)
It's far from obvious to me that this is a dominant view: in 2021, Ben Todd said that broad interventions like improving institutional decision-making and reducing great power conflict were the largest resource gap in the EA cause portfolio.
IIRC, Toby Ord's estimates of the risk of human extinction in The Precipice basically come entirely from AI, and everything else is a rounding error. Since then, AI has only become more pressing. I think it is probably fair to say that "AI is the most pressing x-risk" is a dominant view.
No, you're probably thinking of anthropogenic risk. AI is 1 in 10, whereas the total estimated x-risk is 1 in 6.
I don't think we should defer too much to Ord's x-risk estimates, but since we're talking about them, here goes:
Ord's estimate of total natural risk is 1 in 10,000, which is roughly 1,700 times less than the total anthropogenic risk (1 in 6).
Risk from engineered pandemics (1 in 30) is within an order of magnitude of risk from misaligned AI (1 in 10), so it's hardly a rounding error (although simeon_c recently argued that Ord "vastly overestimates" biorisk).
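To spell out the arithmetic behind those two comparisons (these are just ratios of the figures quoted above, not Ord's own framing): (1/6) / (1/10,000) ≈ 1,700, so natural risk is negligible next to anthropogenic risk, whereas (1/10) / (1/30) = 3, so engineered pandemics sit only about a factor of three below misaligned AI.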
Ah yes, that's right. Still, AI contributes the majority of x-risk.
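As a quick check on "majority": on Ord's numbers, (1/10) / (1/6) = 0.6, i.e. misaligned AI alone accounts for roughly 60% of the total estimated x-risk.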
You think there's an x-risk more urgent than AI? What could be? Nanotech isn't going to be invented within 20 years, there aren't any asteroids about to hit the earth, climate tail risks only come into effect next century, and deadly pandemics or supervolcanic eruptions are inevitable on long timescales but aren't common enough to be the top source of risk in the time until AGI is invented. The only way anything is more risky than AI within 50 years is if you expect something like a major war leading to the use of enough nuclear or biological weapons that everyone dies, and I really doubt that's more than 10% likely in the next half century.
Okay, fine. I agree that it's hard to come up with an x-risk more urgent than AGI. (Though here's one: digital people being instantiated and made to suffer in large numbers would be an s-risk, and could potentially outweigh the expected damage done by misaligned AGI over the long term.)