It seems really useful to me to better understand how likely it is that states will end up calling the shots.
Yes, absolutely. I think this largely depends on the extent to which political elites appreciate AI’s importance; I expect they will come to appreciate it and take action within a few years, years before an intelligence explosion. I want to read/think/talk about this.
While big tech companies will probably come up with more strategies, I’m skeptical that they can avoid being nationalized or closely supervised by states. In response to your specific suggestions:
I think states are broadly able to seize property in their territory. To secure autonomy, a corporation would have to get the government to legally bind itself, and I can’t imagine the US or China doing that. Perhaps a US corporation could make a deal with another government and move its relevant hardware to that state before the US appreciates AI, or before the US has time to respond? That would be quite radical, and given the major national security implications of AI, even such a move might not guarantee autonomy. But I think corporations would probably have to move somehow to maintain autonomy if there were political will and a public mandate for nationalization.
I don’t understand. But if the US and China appreciate AI’s national security implications, they won’t be distracted.
I don’t understand “assembling . . . ability,” but corporations intentionally making AI feel nonthreatening is interesting. I hadn’t thought about this. Hmm. This might be a factor, but there’s only so much that making systems feel nonthreatening can do. If political elites appreciate AI, it won’t matter whether currently deployed AI systems feel nonthreatening: there will be oversight. It’s also very possible that the US will have a Sputnik moment for AI, after which there would be strong pressure for a national AI project regardless of the current state of private AI in the US.