Unlike an asteroid impact, which leaves behind a lifeless or barely habitable world, the AI systems that destroyed humanity would presumably continue to exist and function. These AI systems would likely go on to build their own civilization, and that AI civilization could itself eventually expand outward and colonize the cosmos.
This is by no means certain. We should still be worried about extinction via misuse, such as the development of bioweapons, which could kill off humans before AI is developed enough to be self-replicating/autonomous. Yes, it is unlikely that bioweapons cause extinction, but if they do, no humans means no AI (after all, the power plants fail). This seems to imply moving forward with a lot of caution.
Toby and Matthew, what is your guess for the probability of human extinction over the next 10 years? I personally guess 10^-7. I think disagreements here are often driven by different assessments of the risk.