Reasons why EU laws/policies might be important for AI outcomes
Based on some reading and conversations, I think there are two main categories of reasons why EU laws/policies (including regulations)[1] might be important for AI risk outcomes, with each category containing several more specific reasons.[2] This post attempts to summarise those reasons.
But note that:
I wrote this quite quickly, and this isn’t my area of expertise.
My aim is not to make people focus more on the EU, just to make it clearer what some possible reasons for that focus are. (Overall I actually think the EU is probably already getting enough attention from AI governance people.)
Please comment if you know of relevant prior work, if you have disagreements or think something should be added, and/or if you think I should make this a top-level post.
Note: I drafted this quickly, then wanted to improve it based on feedback & on things I read/remembered since writing it. But I then realised I’ll never make the time to do that, so I’m just posting this ~as-is anyway, since maybe it’ll be a bit useful to some people. See also Collection of work on whether/how much people should focus on the EU if they’re interested in AI governance for longtermist/x-risk reasons.
Summary of the reasons
EU laws/policies might influence AI development/deployment elsewhere (especially in the US, China, and the UK), via one of the following:[3]
The Brussels effect
Copying (for other reasons)
Soft power / shifting norms
Providing a testing ground
EU laws/policies might influence AI development/deployment in the EU itself, which could matter if one of the following happens:
EU might lead: An EU-based actor might become the/a leader in AI development.
EU might be close behind a leader: An EU-based actor might become one of the main “laggards” in the pursuit of highly advanced AI development, such that its behaviour could affect the behaviour of the leader(s).
There may be many important advanced AI developers/deployers, including EU actors:
We could find ourselves in a scenario with highly multipolar development/deployment, slow/continuous takeoff, and/or more misuse/structural risk than accident risk.
If so, there might be (say) 3-10 quite important AI developers/deployers, rather than the only important actors being the “leader” and the main 1-2 “laggards”.
EU actors seem decently likely to be among those 3-10 actors, and more likely than most states, regions, or companies elsewhere are.
Some impressions and hot takes
I think longtermists tend to care about the EU mostly for the first rather than the second set of reasons. And that does seem like the correct focus to me.
Longtermists who are only a bit familiar with the topic of the EU’s importance for AI tend to focus mostly on the Brussels effect, but actually people more familiar with the topic tend to also place significant weight on copying, soft power / shifting norms, and providing a testing ground. I think we should place significant weight on all four of those reasons.
Longtermists tend to think it’s very unlikely that an EU-based actor might become the/a leader in AI development. But I’m not sure I’ve seen careful analysis of that question or careful consideration of the second and third reasons in that second category. I’d appreciate someone pointing me to or creating such analyses.
What do I mean by copying?
Copying would be policymakers/regulators or policy influencers (e.g., advocates) elsewhere copying, adapting, or taking inspiration from EU laws/policies when creating or pushing for laws/policies in their own jurisdictions. I imagine there are several reasons this might happen (this list probably isn’t comprehensive, and I don’t know whether each of these is actually noteworthy):
Busyness on the part of the policymakers/regulators or policy influencers
Lack of expertise on the part of the policymakers/regulators or policy influencers
The EU laws/policies would’ve been tested and (maybe) shown to work at least decently well
Copying may be more defensible than creating something new, or may make it easier to deflect blame to the EU policymakers if something goes wrong
What do I mean by soft power / shifting norms?
This would be things like:
Making it seem like the sort of thing the EU is doing is common, standard, what sensible people do, etc.
Shifting the Overton window
EU actors using diplomacy, advocacy, etc. to influence other actors to do some things similarly to how the EU is doing them
What do I mean by providing a testing ground?
I primarily mean actually providing real lessons on what works, what doesn’t, how best to craft policies, what unanticipated effects occur, what actors get angry about what, etc., such that these lessons can then actually inform policymakers/regulators or policy influencers elsewhere. I.e., not just making something seem more defensible or easier to convince people of, but actually informing what laws/policies are pursued and how they’re crafted.
This could occur, for example, via longtermist actors pushing in the EU for the sorts of things they think would be good in the US, UK, and China, then using lessons from the EU to inform what they push for in those other jurisdictions.
Reasons the EU could be good for this include that it’s “lower stakes” (since it seems less likely to lead in AI development) and that it seems “ahead” on, and more receptive to, substantial AI regulation.
Some additional thoughts
There may well be important reasons I’m missing. Some possibilities:
Affecting geopolitics
Facilitating international treaties, standards, etc.[4]
This post is just about reasons EU laws/policies might be important for AI outcomes, which doesn’t include all reasons why working on EU laws/policies might be important for AI outcomes. The latter would also include:
Gaining career capital (knowledge, skills, connections, credibility) that can be used for other work (including but not limited to AI-related law/policy work elsewhere).
Doing background research with some relevance to risks, laws, and policies outside of the EU.
I think it could be valuable to similarly explore why regions other than the US, China, the EU, and the UK might matter for AI development/deployment. I expect similar reasons would apply elsewhere as well, though with different strength and maybe with some reasons added or removed.
For example, I’ve heard it argued that Singapore could be surprisingly important for reducing AI risk in part because China often copies Singaporean laws/policies.[5]
Again, my aim with this post is not to make people focus more on the EU, and overall I think the EU is probably getting enough attention from AI governance people.
My thanks to Lukas Finnveden, Mathias Bonde, and Neil Dullaghan for helpful comments on an earlier draft. This does not imply their endorsement of this post’s claims.

[1] A reviewer wrote: “Don’t forget directives and decisions! Regulation implies a very specific thing in EU policy; if you say ‘(including regulations)’, that to some extent implies that this article does not concern other EU measures such as directives, decisions, or opinions. https://europa.eu/european-union/law/legal-acts_en”

[2] I know this topic has been written about in multiple existing posts and papers (e.g., many of the posts tagged European Union). But I seem to recall that (a) those I read mostly focused just on the Brussels effect and (b) those I read contained especially little mention of the second category of reasons the EU might matter for AI risk. The post How Europe might matter for AI governance is largely an exception to that and is also worth reading; I see its breakdown and my breakdown as complementary.

[3] A reviewer noted: “In worlds where you think EU policy can have an effect abroad, the absence of EU policies could also have an effect too, right? The absence of a united EU position on AI internationally might allow room for worse policies to advance, or for actors with good policies to lack allies with enough clout. Something like acts of omission or an ‘ally vacuum’ (not sure if the latter is already a concept somewhere).”

[4] A reviewer wrote: “I agree this one might be a big deal, and would include it in your list.”

[5] Interesting! (And for others who might be interested and who are based in Singapore, there’s this Singapore AI Policy Career Guide.)