It’s a common misconception that those who want to mitigate AI risk think there’s a high chance AI wipes out humanity this century. In fact, opinions vary, and proponents of mitigating AI risk may still think the likelihood is low. Crowd forecasts have placed the probability of an AI-caused catastrophe at around 5% this century, and of AI-caused extinction at around 2.5% this century. But even these low probabilities are worth trying to reduce when what’s at stake is millions or billions of lives. How willing would you be to take a pill at random from a pile of 100 if you knew 5 were poisoned? And the risk only grows over timeframes beyond this century.
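To make the stakes concrete, here’s a minimal sketch of the expected-value arithmetic, assuming the ~2.5% crowd forecast quoted above and a world population of roughly 8 billion (both figures are illustrative, not estimates of my own):

```python
# Illustrative expected-value arithmetic under the assumptions stated above.
p_extinction = 0.025          # crowd forecast for AI-caused extinction this century
population = 8_000_000_000    # rough current world population

expected_deaths = p_extinction * population
print(f"Expected deaths in this scenario: {expected_deaths:,.0f}")
# -> Expected deaths in this scenario: 200,000,000
```

Even before counting future generations, 2.5% of the current population is around 200 million lives in expectation, which is why a probability that sounds low can still justify serious mitigation effort.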
I think the above could be improved with forecasts of extinction risk from prominent AI safety proponents like Yudkowsky and Christiano, if they’ve made any, though I’m not aware of whether they have.