I’m on the more optimistic side. I think there’s only a few percent chance that AI kills everyone. Maybe 1 or 2%.
Thanks for the post, Matthew!
I guess the risk of human extinction over the next 10 years is like 10^-7. A typical mammal species lasts around 1 M years, which suggests an extinction risk of 10^-5 over 10 years, and I think humans are much more resilient. Mammals usually go extinct due to competition from other species or climate change, and I believe both of these are way less likely to drive humans extinct. Species living in larger areas are also less likely to go extinct, and humans live all across the globe.
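The base-rate arithmetic here can be sketched with a constant-hazard (exponential survival) model — an assumption I am adding for illustration, not something stated in the comment:

```python
import math

def extinction_probability(mean_lifespan_years: float, horizon_years: float) -> float:
    """P(extinction within the horizon), assuming a constant annual hazard
    rate of 1 / mean_lifespan_years (exponential survival model)."""
    hazard = 1.0 / mean_lifespan_years
    return 1.0 - math.exp(-hazard * horizon_years)

# Typical mammal species: ~1 M year lifespan, 10-year horizon.
p_mammal = extinction_probability(1e6, 10)
print(f"{p_mammal:.2e}")  # ~1e-05, matching the 10^-5 prior above
```

Under this model the quoted 10^-5 per decade follows directly from the ~1 M year mean species lifespan; the further discount to 10^-7 reflects the judgment that humans are far more resilient than a typical mammal.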
I am open to updating to a much higher risk than suggested by the above priors, but I would need much stronger evidence. For example, a catastrophe caused by AI killing 1 M people in 1 year, or a quantitative model outputting a high risk of extinction with inputs informed by empirical data. I am not aware of any quantitative model outputting the probability of AI causing human extinction.