Would be interested to see your reasoning for this, if you have it laid out somewhere.
I have not engaged much with AI risk, but my views about it are informed by the considerations in the 2 comments in this thread. Mammal species usually last around 1 M years, and I am not convinced by arguments for extinction risk being much higher (I would like to see a detailed quantitative model), so I start from a prior of 10^-6 extinction risk per year. Then I guess the risk is around 10 % as high as that, because humans currently have tight control of AI development.
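Spelling that out (a rough sketch using only the figures above, and treating the 10 % as a simple multiplier on the prior):

$$10^{-6} \text{ per year} \times 10\,\% = 10^{-7} \text{ per year}.$$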
Is it mainly because you think it's ~impossible for AGI/ASI to happen in that time? Or because it's ~impossible for AGI/ASI to cause human extinction?
To be consistent with 10^-7 extinction risk, I would guess 0.1 % chance of gross world product growing at least 30 % in 1 year until 2027, due to bottlenecks whose effects are not well modelled in Tom Davidson's model, and 0.01 % chance of human extinction conditional on that.
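As a quick consistency check (just multiplying the two guesses, and assuming the fast-growth pathway carries essentially all of the risk):

$$0.1\,\% \times 0.01\,\% = 10^{-3} \times 10^{-4} = 10^{-7},$$

which recovers the headline 10^-7 figure.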
Interesting. Obviously I don't want to discourage you from the bet, but I'm surprised you are so confident based on this! I don't think the prior of mammal species duration is really relevant at all, when for 99.99% of the last 1M years there hasn't been any significant technology. Perhaps more relevant is Homo sapiens wiping out all the less intelligent hominids (and many other species).
On the question of priors, I liked "AGI Catastrophe and Takeover: Some Reference Class-Based Priors". It is unclear to me whether extinction risk has increased in the last 100 years. I estimated an annual nuclear extinction risk of 5.93×10^-12, which is about 5 orders of magnitude lower than the prior of 10^-6 for wild mammals.
I see that in your comment on that post you say "human extinction would not necessarily be an existential catastrophe" and "So, if advanced AI, as the most powerful entity on Earth, were to cause human extinction, I guess existential risk would be negligible on priors?". To be clear: what I'm interested in here is human extinction (not any broader conception of "existential catastrophe"), and the bet is about that.
Agreed.
See my comment on that post for why I don't agree. I agree nuclear extinction risk is low (but probably not that low)[1]. ASI is really the only thing that is likely to kill every last human (and I think it is quite likely to do that given it will be way more powerful than anything else[2]).
[1] But to be clear, global catastrophic / civilisational collapse risk from nuclear is relatively high (these often get conflated with "extinction").
[2] Not only do I think it will kill every last human, I think it's quite likely it will wipe out all known carbon-based life.