I think you are somewhat missing the point. The point of a treaty with an enforcement mechanism which includes bombing data centers is not to engage in implicit nuclear blackmail, which would indeed be dumb (from a game theory perspective). It is to actually stop AI training runs. You are not issuing a “threat” which you will escalate into greater and greater forms of blackmail if the first one is acceded to; the point is not to extract resources in non-cooperative ways. It is to ensure that the state of the world is one where there is no data center capable of performing AI training runs of a certain size.
The counterfactual here is between two treaties that are identical, except one includes the policy “bomb data centers in nuclear-armed nations” and one does not. The only case where they differ is the scenario where a nuclear-armed nation starts building GPU clusters, in which case policy A demands resorting to nuclear blackmail once all other avenues have been exhausted, while policy B does not.
I think a missing ingredient here is the scenario that led up to this policy. If there had already been a warning shot where an AI trained in a GPT-4-sized cluster killed millions of people, then it is plausible that such a clause might work, because both parties would be putting clusters in the “super-nukes” category.
If this hasn’t happened, or the case for clusters being dangerous is seen as flimsy, then we are essentially back at the “China threatens to bomb OpenAI” scenario. I think this is a terrible scenario, unless you actually do think that nuclear war is preferable to large data clusters being built. (To be clear, I think the chance of each individual data cluster causing the apocalypse is minuscule.)