Intuitive reaction: I think these are all valuable arguments to have explicitly laid out, thanks for doing so. I think they don’t quite capture my main intuitions about the value of EU-directed governance work, though; let me try to explain those below.
One intuition draws from the classic distinction between realism and liberalism in international relations. Broadly speaking, I see the EU as being most relevant from a liberal perspective, and much less relevant from a realist perspective. And although I think both sides offer important perspectives, the dynamics surrounding catastrophic AGI development feel much better-described by realism than by liberalism—it feels like that’s the default things will likely fall back to if the world gets much more chaotic and scary, with potential big shifts in the global balance of power.
Second intuition: when it comes to governing AGI, I expect that acting quickly and decisively will be crucial. I can kinda see the US govt. being able to do this (especially by spinning off new agencies, or by presidential power). I have a lot more trouble seeing the EU being able to do this, even in a best-case scenario (does the EU even have the ability in theory, let alone in practice, to empower fast-moving organisations with specific mandates?).
Compared with your arguments, I think these two intuitions are more focused on working backwards from a “theory of victory” to figure out what’s useful today (as opposed to working forwards towards gaining more influence). Our overall thinking about theories of victory is still so nascent, though, that it feels like there’s a lot of option value in having people go down a bunch of different pathways. Plus I have a few other intuitions in favour of the value of EU-directed governance research: firstly, I think people often overestimate the predictability of AGI development. E.g. a European DeepMind popping up within the next decade or two doesn’t seem that much less plausible than the original DeepMind popping up in England. Might just take a few outlier founders to make that happen. Secondly, separate from progress on AI itself, it does seem plausible that the EU will have significant influence over the chip supply chain going forward (right now most notably via ASML, as you mention).
Overall I do think people with a strong comparative advantage should do EU-governance-related things; I’m just very uncertain how strong that comparative advantage needs to be for this to be one of the best career pathways (although I do know at least a few people whose comparative advantage does seem strong enough for it to be the correct move).
I think liberalism vs realism is an interesting lens, but the conclusion doesn’t seem right to me. You say you’re working backwards from a theory of victory, but that argument was at least partly working backwards from a theory of catastrophe. I think there’s an is-ought problem here: if we want things to go well, then we might want to actively encourage more cooperative international relations, whilst also not ignoring the powerful forces at play.
I believe “Victory” here means avoiding catastrophic AGI, which could require encouraging more cooperative international relations.
As for your point on Liberalism vs Realism, Richard, I think it is captured by at least 6 arguments listed in the post (Brussels effect, AI development/governance collaborations, growing the political capital, AI superpower, US-China differential, military pathways). Indeed when I read:
“the dynamics surrounding catastrophic AGI development feel much better-described by realism than by liberalism—it feels like that’s the default that things will likely fall back to …”
I decompose “dynamics” and “things” ultimately as actions taken by relevant actors (if necessary, here is my post-length explanation of what I mean by that). At the risk of frustrating generations of IR schools of thought, I’d push the framework further by saying that “Realism” could very roughly be reduced to “the way relevant actors think when they have a low level of trust in relevant actors from different nations”. And to make sure the other half of IR scholars also shriek, “Liberalism” would be reduced to “the way relevant actors think when they have a high level of trust in relevant actors from different nations”. (I would also reduce “nations” to “groups of actors”, but that’s not necessary to the main point.)
The implication of that lower level of international trust (aka Realism) for the debate on whether the EU is relevant is that the validity of these 6 arguments changes (e.g. in a more Realist world, the Brussels effect is weaker than in a Liberal world). Let me know, Richard, if you think that there is a separate, independent causal link between international trust levels and the relevance of the EU that doesn’t rely on these 6 arguments/factors, and I can add it to the main post.
For the concrete impact of governance on AGI, which you, tamgent, seem to allude to, the implications of Realism are deeper: mistrust definitely alters all the game-theoretic results. This reduces the range of options that an AGI-concerned decision-maker has. A plausible example is their inability to establish a joint alignment testing protocol/international standard. Is there anything that can be achieved with mistrust/realism that cannot be achieved with trust/liberalism? I don’t think so. (That doesn’t mean that increasing this level of international trust is a cost-effective intervention, though.)
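To make the trust framing above concrete, here is a toy sketch (my own illustration, with made-up payoff numbers, not anything from the post): modeling the joint-standard example as a stag hunt, where `trust` is the probability an actor assigns to the other side actually committing to the standard. Low trust flips the expected-value-maximizing choice from cooperation to going it alone, which is the Realist narrowing of options described above.

```python
def best_action(trust: float) -> str:
    """Expected-value-maximizing action in a one-shot 'joint alignment
    standard' game, modeled as a stag hunt with hypothetical payoffs:
    both commit to the standard -> 4; you commit but the other defects
    -> 0; going it alone -> 3, regardless of what the other side does.

    `trust` is the subjective probability that the other actor commits.
    """
    ev_cooperate = trust * 4 + (1 - trust) * 0  # payoff depends on the other actor
    ev_defect = 3.0                             # safe unilateral payoff
    return "cooperate" if ev_cooperate > ev_defect else "defect"

# With these numbers, cooperation is only rational when trust > 0.75:
print(best_action(0.9))  # high trust ("Liberal" world) -> "cooperate"
print(best_action(0.5))  # low trust ("Realist" world) -> "defect"
```

The payoff structure is the standard assurance-game setup often used in IR for security cooperation; the specific numbers and the 0.75 threshold they imply are purely illustrative.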
Thank you Richard – I think your second intuition is a great point. Does this rephrasing capture it? I included the notion of decisiveness as well, which is related. If so, and with your permission, I would add it to the main post.
+++
7. EU-level policymaking is slower to react to events like AGI and/or less decisive
Compared to the US or China, the EU has little centralized power and few centralized resources. All 27 EU member states preserve significant control over EU decision-making, as embodied by veto powers and “executive” procedures that still require member states’ collective approval. This is occasionally made more difficult by the European Parliament, which includes 7 political groups and 705 members and whose power has been growing over the years. As a result, most decisions from the European Commission require consultations with many stakeholders and therefore take time. Moreover, the EU-level public budget represents ~1.5% of GDP, compared to ~20% of GDP in the US – so even when there is agreement, it is unclear whether the EU can garner the resources to whip up a decisive response. This structure also prevents the EU from producing the equivalent of US executive orders.
How does it affect the EU’s relevance?
If the development of AGI requires a quick or well-resourced policy response from government, EU-level policymaking might not be as influential as American or Chinese policymaking.
+++
My opinion:
This factor is definitely relevant, so thanks for bringing it up; I think it crucially depends on takeoff speeds, though. (Anyone with a better understanding of US policymaking should correct me if I am wrong here.) If AGI requires strong, one-off government interventions within a 3-6 month window, US policymaking offers a considerable advantage through its strong and quick executive powers. However, for interventions with time horizons of over 6 months, where urgency matters less, the case for executive orders fades as far as I understand. Decision-making then falls to the legislature, which has been deadlocked for the past 12 years. On the EU’s side, however, the legislative process has been comparatively productive, passing tech policies with relative ease. Even though the EU procedure is slow, my intuition is that its impact is more structural than that of US executive orders (please correct me?). In the cases where AGI safety requires a government intervention within 3-6 months, my hope would be that institutions are already in place to guarantee that this quick intervention takes place – e.g. a regulatory agency that has been mandated to do AI code probing and auditing for >20 years before takeoff, for whom auditing AI labs’ alignment would be common practice.
This conversation makes me realize a more constructive version of this post would list “important factors” for the relevance of the EU rather than arguments in favour and against. Ah well.