Well, the probability of AGI doom doesn’t depend on the probability that AI can ‘conquer the world’.
It only depends on the probability that AI can disrupt the world sufficiently that the latent tensions in human societies, plus all the other global catastrophic risks that other technologies could unleash (e.g. nukes, bioweapons), lead to vicious downward spirals, eventually culminating in human extinction.
This doesn’t require AGI or ASI. It could happen through very good AI-generated propaganda, deployed at scale, in multiple languages, in a mass-customized way, by any ‘bad actors’ who want to watch the world burn. And there are many millions of such people.