This has always been the least convincing part of the AI risk argument for me. I’ll probably sketch out more in-depth objections in a post someday, but here’s a preliminary argument:
First, the scenarios where the AI takes over quickly seem to assume a level of omnipotence and omniscience on the part of an AGI that is extremely unlikely. For example, the premise of “every single person in the world suddenly dies” (with no explanation given). No plan in the history of intelligence has reached that level of perfection. There is no test data on “subjugate all of humanity at once”, and because knowledge requires empirical testing and evidence, mistakes will be made. Since taking over the world is insanely hard, I think those mistakes alone will be enough to cause failure.
Secondly, the scenarios where the AI takes over slowly have the problem that if the accumulation of power is slow, there’s enough time for multiple AIs with different goals to emerge. If the AI risk reasoning is correct, it’s likely they’ll deduce that the other AIs are ultimately their biggest threat. They’ll either war with each other, or prematurely attack humanity to ensure no more AIs are made.
Once an AI in either scenario is discovered, the problem reduces to a conventional war between the AI and the entire rest of planet Earth. I’d be interested in seeing a military analysis of how such a war would go. My intuition is that if it occurred today, the AI would be screwed: it needs electricity to live and we don’t. Also, pretty much all existing military equipment has at least some manual components. That may change as time goes on.