I am not speaking for the DoD, the US government, or any of my employers.
I think that your claim about technological inevitability is premised on states wanting to regulate key technologies, sometimes mediated by public pressure. All of the examples listed were blocked for decades by regulation, sometimes supplemented by public fear, soft regulation, etc. That works so long as governments don't consider advancement in the field a core national interest. The US and China do, and often in an explicitly securitized form.
Quoting CNAS:
"China’s leadership – including President Xi Jinping – believes that being at the forefront in AI technology is critical to the future of global military and economic power competition."
English-language coverage of the US tends to avoid such sweeping statements, because readers have more local context, because political disagreement is more public, and because readers expect it.
But the DoD in the most recent National Defense Strategy identified AI as a secondary priority. Trump and Biden both identified it as an area in which to maintain and advance national leadership. And, of course, with the US in the lead, they don't need to do as much in the way of directing people, since the existing system is delivering adequate results.
Convincing the two global superpowers not to develop militarily useful technology while tensions are rising would be a first; it has never been accomplished in history.
That’s not to say that we can’t slow it down. But AI very much is inevitable if it is useful, and it seems like it will be very useful.
A number of things. Firstly, this criticism may be straightforwardly correct; it may be that this is pursuing something that would be a first in history (though I'm less convinced of that; cf. e.g. bioweapons regulation); nonetheless, other approaches to TAI governance seem similarly demanding (e.g. trusting one actor to develop a transformative and risky technology and not use it for ill). It may indeed require such change, or at least a change in the perception of the potential and danger of AI (which is possible).
Secondly, this may not be the case. Foundation models (our present worry) may be no more beneficial (or even less so) in military contexts than narrow systems. Moreover, foundation models developed by private actors seem to challenge state power in a way that neither the Chinese government nor the US military is likely to accept. Thus, AI development may continue without dangerous model growth.
Finally, very little development of foundation models is driven by military actors, and the actors that do develop them may be constructed as legitimately trying to challenge state power. If we are on a path to TAI (we may not be), then it seems that in the near term only a very small number of actors, all private, could develop it. Maybe the US military could gain the capacity to, but at the moment that seems hard for them to do.