I believe “Victory” here means avoiding catastrophic AGI, which could require encouraging more cooperative international relations.
As for your point on Liberalism vs. Realism, Richard, I think it is captured by at least six of the arguments listed in the post (Brussels effect, AI development/governance collaborations, growing the political capital, AI superpower, US-China differential, military pathways). Indeed, when I read:
“the dynamics surrounding catastrophic AGI development feel much better-described by realism than by liberalism—it feels like that’s the default that things will likely fall back to …”
I ultimately decompose "dynamics" and "things" into actions taken by relevant actors (if necessary, here is my post-length explanation of what I mean by that). At the risk of frustrating generations of IR schools of thought, I'd push the framework further by saying that "Realism" could very roughly be reduced to "the way relevant actors think when they have a low level of trust in relevant actors from different nations". And, to make sure the other half of IR scholars also shriek, "Liberalism" would be reduced to "the way relevant actors think when they have a high level of trust in relevant actors from different nations". (I would also reduce "nations" to "groups of actors", but that's not necessary to the main point.)
The implication of that lower level of international trust (aka Realism) for the debate on whether the EU is relevant is that the validity of these six arguments changes (e.g. in a more Realist world, the Brussels effect is weaker than in a Liberal world). Let me know, Richard, if you think there is a separate, independent causal link between international trust levels and the relevance of the EU that doesn't rely on these six arguments/factors, and I can add it to the main post.
For the concrete impact of governance on AGI, which you, tamgent, seem to allude to, the implications of Realism run deeper: mistrust definitely alters all the "game theoretical" results. This reduces the range of options available to an AGI-concerned decision-maker. A plausible example is their inability to establish a joint alignment testing protocol/international standard. Is there anything that can be achieved with mistrust/realism that cannot be achieved with trust/liberalism? I don't think so. (That doesn't mean that increasing this level of international trust is a cost-effective intervention, though.)
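To make the "mistrust alters the game-theoretical results" point concrete, here is a minimal, purely illustrative sketch (not from the original post, with made-up payoff numbers) of an assurance-game model: each side prefers a joint alignment testing protocol if the other joins too, but cooperating unilaterally leaves it exploited. Below some trust threshold, defection is individually rational, which is the Realist failure mode described above.

```python
# Illustrative assurance ("stag hunt") model of the joint-protocol decision.
# All payoff values are hypothetical, chosen only to show the mechanism.
JOINT_PROTOCOL = 4   # both sides join the alignment testing protocol
EXPLOITED = 0        # we join, the other side defects and races ahead
DEFECT = 3           # we stay out, regardless of what the other side does

def cooperate_is_rational(trust: float) -> bool:
    """trust = subjective probability that the other side cooperates.

    Joining the protocol is rational when its expected payoff beats
    staying out.
    """
    ev_cooperate = trust * JOINT_PROTOCOL + (1 - trust) * EXPLOITED
    ev_defect = DEFECT  # same payoff whatever the other side does
    return ev_cooperate > ev_defect

# Low trust ("Realism"): staying out dominates. High trust ("Liberalism"):
# the joint protocol becomes individually rational. With these payoffs the
# threshold is trust > 3/4.
for trust in (0.1, 0.5, 0.9):
    print(f"trust={trust}: cooperate rational? {cooperate_is_rational(trust)}")
```

The point of the sketch is only that the *same* actors with the *same* payoffs reach different equilibria depending on the trust parameter, which is why options like a joint testing protocol drop out of the feasible set in a low-trust world.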