One of the weaker parts of the Situational Awareness essay is Leopold’s discussion of international AI governance.
He calls the notion of an international treaty on AI “fanciful”, claiming that:
It would be easy to “break out” of treaty restrictions
There would be strong incentives to do so
So the equilibrium is unstable
That’s basically it—international cooperation gets about 140 words of analysis in the 160-page document.
I think this is seriously underargued. Right now it seems harmful to propagate a meme like “International AI cooperation is fanciful”.
This is just a quick take, but I think it’s the case that:
It might not be easy to break out of treaty restrictions. Of course it will be hard to monitor and enforce a treaty. But there’s potential to make verification feasible through hardware mechanisms, cloud governance, inspections, and other approaches we haven’t even thought of yet. (For a flavor of the hardware idea, see the toy sketch after this list.) Lots of people are paying attention to this challenge and working on it.
There might not be strong incentives to do so. Decisionmakers may take the risks seriously and calculate that the downsides of an all-out race exceed the potential benefits of winning. Credible benefit-sharing and shared decision-making institutions may convince states they’re better off cooperating than trying to win a race.
International cooperation might not be all-or-nothing. Even if we can’t (or shouldn’t!) institute something like a global pause, cooperation on narrower issues to mitigate threats from AI misuse and loss of control could be possible. Even in the midst of the Cold War, the US and USSR managed to agree on arms control, non-proliferation, and limits on technologies like anti-ballistic missiles.
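To give a flavor of what the “hardware mechanisms” point could mean in practice, here’s a minimal, purely illustrative sketch: imagine each accelerator is provisioned with a secret key at the factory and periodically signs reports of its compute usage, which a treaty monitor can verify against a registry. Everything here (the names, the shared-secret design) is my own hypothetical toy example, not anything from the essay or any real proposal; an actual scheme would use asymmetric keys held in tamper-resistant hardware.

```python
import hmac
import hashlib
import json

# Hypothetical illustration only: a real scheme would use asymmetric keys
# inside tamper-resistant hardware, not a shared secret in software.

# Registry mapping chip IDs to per-chip keys, held by a treaty monitor.
CHIP_KEY_REGISTRY = {"chip-001": b"factory-provisioned-secret"}

def chip_sign_report(chip_id: str, key: bytes, flops_used: int) -> dict:
    """Runs on the accelerator: sign a usage report with the chip's key."""
    report = {"chip_id": chip_id, "flops_used": flops_used}
    payload = json.dumps(report, sort_keys=True).encode()
    report["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return report

def monitor_verify_report(report: dict) -> bool:
    """Runs at the treaty monitor: check the report against the registry."""
    key = CHIP_KEY_REGISTRY.get(report["chip_id"])
    if key is None:
        return False  # unknown chip: possible treaty breakout
    payload = json.dumps(
        {k: v for k, v in report.items() if k != "mac"}, sort_keys=True
    ).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(report["mac"], expected)

report = chip_sign_report("chip-001", b"factory-provisioned-secret", 10**15)
assert monitor_verify_report(report)      # honest report verifies
report["flops_used"] = 10**20             # tampered usage figure...
assert not monitor_verify_report(report)  # ...fails verification
```

The point of the toy example is just that compliance reporting can be made hard to forge without the monitor seeing the workload itself; that’s the kind of property the compute-governance work mentioned above is trying to get at.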
(I critiqued a critique of Aschenbrenner’s take on international AI governance here, so I want to clarify that I do in fact think his model is probably wrong on this point.)
Can I tweet this? I think it’s a good take
Tweet away! 🫡