The EU’s importance for AI governance is conditional on AI trajectories—a case study
The goal of this post is to show that AI trajectories matter a great deal when evaluating how an institution can be expected to influence AI governance. To show this, I argue that the importance of the Brussels effect, one of the EU’s levers of influence, is highly conditional on which AI trajectory we assume.
For those interested in learning more about the Brussels effect specifically, I know someone else is working on a paper that provides a much better analysis than I do here.
What is the Brussels effect?
The Brussels effect describes the phenomenon whereby the European Union’s regulation spreads to the rest of the world, both through other countries passing similar laws and through companies following EU regulation outside of Europe.
When larger economies such as the US or China don’t regulate an area, regulation defaults to the EU’s. This is the case for privacy, where, for example, Microsoft decided to make all of its services GDPR-compliant worldwide rather than just for its EU users. Even though the EU’s regulation applies only to EU citizens, the rest of the world often becomes subject to it regardless. Other governments are also often inspired by EU regulation when writing their own.
When does the Brussels effect take place?
Five requirements must be met for the Brussels effect to take place:[1]
Ideology & interest
Is the European Union interested in regulating AI?
Sufficient market size of the EU
Is the EU market large enough that it is worthwhile for providers of AI services to spend the resources necessary to comply with EU regulation?
Regulatory capacity
Do the EU institutions have the capacity and mandate to regulate AI?
Inelastic targets
Are those affected by the EU’s regulation able to simply move elsewhere to avoid it?
Governments interested in using unregulated AI must benefit more from it than they do from membership of the Union to justify leaving. The same applies to Europe’s citizens and companies, for whom the benefits of unregulated AI must outweigh the costs of moving to a non-EU country.
Non-divisibility
Are companies able to cheaply divide their services into one version that is compliant with EU regulation and another non-compliant version for the rest of the world?
The extent to which the Brussels effect will affect AI governance is conditional on how AI development progresses. To illustrate why, imagine two scenarios: one with a slow and continuous AI take-off, the other with a faster, more discontinuous take-off.
Slow take-off
In this scenario, AI capability progresses with a slow take-off speed. AI development is primarily driven by private enterprise, and progress comes in the shape of incremental improvements, each AI better than the last. In such a world there is good reason to believe the Brussels effect will spread European AI regulation to the rest of the world, as all five requirements are met.
Fast take-off
In this scenario there is a fast take-off, and AI development is primarily driven by governments and a few enterprises racing for a discontinuous payoff, where the winner largely takes all. In this world, AI development looks more like a Manhattan Project than like companies pursuing ever-improving iterations of GPT.
The EU has failed in many of its external agendas: it has been unable to abolish torture, solve migration crises, or achieve nuclear disarmament. These are examples of issues where European legislation failed to meet the five requirements needed for the Brussels effect to take place. In a fast take-off world, I expect Europe’s influence to be similarly reduced, as EU AI regulation in this scenario fails to meet three of the five requirements.
The scenarios can be described in the following table:
|  | Slow take-off | Fast take-off |
| --- | --- | --- |
| Ideology & interest | ✓ - The EU is trying to regulate AI with the AI Act, and we can expect it to continue doing so | ✓ - No difference |
| Sufficient market size of the EU | ✓ - The EU market is large enough that companies are unlikely to forgo it to avoid regulation | ✗ - Whichever government or company first develops transformative AI stands to gain so much profit and power that forgoing the European market is worthwhile if it means winning the race |
| Regulatory capacity | ✓ - The EU can be expected to have the regulatory capacity, as maintaining the European single market is one of its core competencies | ✗ - The Council will block regulatory attempts that conflict with the national interests of EU member states. In a race to AGI between non-EU nations, the EU institutions have no diplomatic tools at their disposal powerful enough to significantly alter the conflict |
| Inelastic targets | ✓ - For governments, the benefits of using non-EU-compliant AI must outweigh the benefits of Union membership. The same goes for European citizens and companies | ✗ - Companies whose AGI development is slowed by EU regulation will move elsewhere, or be beaten to the punch by those that do. There is no strong profit motive for complying with European regulation |
| Non-divisibility | ✓ - Using the GDPR as a historical precedent, we can expect developers of major AI products to prefer a single EU-compliant version over splitting development into multiple versions or forgoing the EU market | ✓ - No difference |
Depending on your expectations of how AI will be developed, your beliefs about the importance of the Brussels effect should update accordingly.
I believe that better outlining the axes along which AI trajectories can differ, and how those axes affect the levers of influence, is an important step toward evaluating the EU’s importance for global AI governance. Hopefully this post has given an idea of why I think so.
[1] These criteria are identified by Anu Bradford in her seminal book on the topic, *The Brussels Effect*. She divides the Brussels effect into a de facto and a de jure effect. For the purposes of this post, the Brussels effect refers only to the de facto effect.
Thanks for the post, I think it’s really useful to get a better picture of interactions like these.
I wonder whether I really expect companies to end up being that averse to AI regulation:
I expect decision-makers in companies to get increasingly worried about AI progress and associated problems of control, alignment, and so forth
I expect the same for shareholders of the companies
They might appreciate regulations/constraints for their AI development teams, if the regulations increase safety at a reasonable cost
I can picture companies accepting very high costs… maybe the regulations on nuclear energy and the reactions of industry are somewhat analogous and interesting to look at?
Companies might see themselves as largely regulating their own AI systems and might welcome “regulative help” from a competent outside body
I completely agree that the classification of trajectories should be much more nuanced. I don’t think a fast take-off implies these things either. The reason I bundle them together is to create two very distinct scenarios, making for a simpler analysis.
A more thorough analysis would separate these dynamics and analyse all the possible combinations. It would also be better to evaluate the EU’s institutions separately and to analyse more than a single lever of influence.