the AI capability level that poses a meaningful risk of human takeover comes earlier than the AI capability level that poses a meaningful risk of AI takeover.
I don’t think it comes meaningfully earlier. The gap might only be a few months: an AI capable of doing the work of a military superpower would be capable of doing most of the work involved in AI R&D, precipitating an intelligence explosion. And the humans wielding that power would lose it to the AI too, unless they halted all further AI development (which seems unlikely, due to hubris or complacency, if nothing else).
starting off with almost none (which will be true of the ASI)
Any ASI worthy of the name would probably be able to go straight for an unstoppable nanotech computronium grey goo scenario.