How Europe might matter for AI governance

This post explores which levers exist in Europe for influencing the governance of AI. The scope includes potential actions taken by bodies/offices/agencies of the EU, its constituent member countries, or other potentially relevant European countries like Switzerland. I'm not looking at self-regulation by leading AI development groups in Europe, both because some attention is already being paid to that and because there aren't any such groups beyond DeepMind. I should note that at this level of abstraction, the analysis will necessarily be somewhat crude.

My role in AI governance: I'm personally interested in the topic. Beyond that, we at the Effective Altruism Foundation are concerned about risks from AI, and governance is one lever for affecting the relevant outcomes. Since our team members mainly hail from European countries and we are based there, it made sense to pick this as an entry point. Please get in touch with me at stefan.torges@ea-foundation.org if you want to talk about content related to this post.

Epistemic status and methodology: I'm neither a legal nor a political science expert (beyond some undergrad coursework) and have not worked in governance (except for a few internships). This analysis is based on my fairly superficial understanding of different governance mechanisms and on conversations I've had with people in the EA community about them. Some of these people had little knowledge of AI governance, some had a lot; nobody had a lot of knowledge about European governance mechanisms (and how they might relate to AI). Therefore, I have restricted myself to statements I feel sufficiently confident to make based on that knowledge and have expressed my remaining thoughts in the form of questions to be addressed in the future. I'm fairly confident that the pathways I outline below cover most of the potential levers that exist. However, I'm much less confident about their absolute and relative importance (with some exceptions).

Acknowledgments: I’m grateful for the helpful comments by people involved with the Effective Altruism Foundation on a draft of this post. I also want to thank the people in the AI governance community for taking the time to speak to me about this.

Summary

  • The best case for working on governance in Europe probably rests on personal fit and comparative advantage. Other, more general reasons strike me as fairly weak.

  • Europe might have some fairly direct influence (executive or legislative) over AI development groups: either because they’re located in Europe (e.g., DeepMind) or because they’re transnational companies operating in Europe (e.g., Google, Facebook).

  • Europe might have significant indirect influence on AI development via a number of different pathways: they might set norms or pass blueprint regulations that are subsequently adopted in other jurisdictions; they might have significant say via international regimes governing AI development and deployment; they might influence the power balance between the US and China by “taking sides” in key situations; their planned “AI build-up” might influence the global AI landscape in hard-to-anticipate ways.

  • I don’t have strong views about the relative importance of these different pathways. I’d welcome more research on the legal situation of DeepMind, the relevance of different international bodies for the governance of AI, the prevalence of the EU as a norm or regulation role model, and what these different pathways imply for career choice in this field.

Why look at Europe at all?

So far, the AI governance community concerned with the long-term effects of transformative AI (mainly within or adjacent to the EA community) seems to have focused mainly on the US and China, with some notable exceptions[1]. The key drivers behind this seem to be that most key AI development groups are located there (e.g., OpenAI, Google, Facebook, Microsoft, Amazon, Tencent, Alibaba, Baidu) and that these two countries appear to be ahead more generally when it comes to AI capabilities research.

However, even assuming that this picture is roughly accurate, it could still make sense for some people to work toward influencing relevant European governance.

This claim is mainly driven by considerations related to personal fit and comparative advantage. For instance, a lot of US government roles are not open to non-US citizens. There will also only be a limited number of policy roles at groups like DeepMind or OpenAI.

There also seems to be an okay outside-view argument in favor of influencing the EU and its member countries (and Europe more generally) when it comes to questions of governance. The EU is the second-largest economy in the world, behind the US but still ahead of China. Its constituent countries, France, the UK, and Germany in particular, also still have a lot of influence (some would say "disproportionate") in international bodies, partly for historical reasons. My impression is that European scientific institutions are still at the cutting edge in many fields. Therefore, one might expect Europe to matter for the governance of AI in ways that are hard to anticipate. In particular, this argument pushes for allocating more resources toward influencing Europe at the expense of China, since the US seems to be ahead of Europe on most such measures. I don't give this argument a lot of weight, though, since I expect a detailed comparative look at the AI landscape (which seems to favor China over Europe) to be more informative.

Another more speculative reason, which I also don’t give a lot of weight, might be “threshold effects” in certain international contexts. A toy example is passing a resolution in some international body. Since this usually requires a majority, it could be important to build influence in lots of countries that could sway such a vote.

Concrete pathways for European governance to influence AI development

Direct legislative or executive influence over relevant stakeholders

There are several ways in which European stakeholders might be able to exert direct political influence on leading AI development groups.

DeepMind

DeepMind is one of the leading companies developing AI technology, and it is currently located in the UK. While the company was acquired by Google in 2014 (it is now a subsidiary of Alphabet), its location makes it potentially susceptible to European influence. Conditional on Brexit, this influence would be reduced to that of the UK. Personally, I don't have a good understanding of the legal situation surrounding DeepMind.

Further questions:

  • What legislative or executive levers do the EU or the UK currently have on DeepMind?

  • How does that change when taking into account extraordinary conditions such as national emergencies or wars?

Transnational AI development companies

The EU has significant and direct regulatory influence over transnational companies (e.g., Facebook, Google, Amazon, Apple): for instance, it might set certain explainability standards for AI algorithms used in personal assistants offered by Google or Amazon. Such companies often find global compliance easier than differential regional compliance, a phenomenon that has been called the "Brussels effect". GDPR is a good example of this in the technology sector. Even forced regional compliance alone would likely have ramifications for differential AI development (e.g., compliance might slow down capability development within these companies). To the extent that such companies are relevant to AI progress, the EU is a relevant stakeholder.

Further questions:

  • How likely is such regulation in the first place?

  • Which groups are most likely to be affected by such regulation?

Other European groups relevant to A(G)I development

Europe seems to be lagging behind the US and China in terms of AI capabilities and their future trajectory (with the exception of DeepMind). However, this might turn out to be wrong on closer inspection (which seems very unlikely) or change over time (which seems somewhat unlikely). If so, there might be relevant A(G)I development groups in Europe at some point. It could also be the case that certain European groups are leaders within certain subfields crucial for A(G)I development, even if they lag behind in most areas. Chip development is an illustrative example of such a strategically important area (NB: Europe is not leading in chip development).

Further questions:

  • What is the state of European AI capabilities research compared to the US and China? If Europe is lagging behind, how likely is it to catch up? What's the most likely development path?

  • Which European countries are most likely to be relevant for AI development?

  • Which European development groups (excluding DeepMind) are most likely to be relevant global players?

  • Are there fields related to A(G)I development in which Europe or European groups are leading? Which ones?

Indirect influence

“Spill-over governance” via role modeling

Regulation and norms related to AI put forward by European countries or the EU might influence relevant governance in other jurisdictions. This is especially relevant to the extent that it applies to the US and China. GDPR, again, can serve as a useful example here: China apparently modeled its data privacy regulation to a large extent on GDPR, and California appears to have done the same. When it comes to AI, the EU is already developing a focus on "Trustworthy AI", which might have relevant spill-over effects.

Further questions:

  • To what extent has this been the case for other regulation beyond GDPR, especially in the realm of technology policy? How does AI compare to these other examples?

Influence on international regimes

European countries and the EU are likely to play some role in the global governance of AI. So to the extent that the global governance of AI will matter, whether through existing regimes or the creation of new ones, European influence will likely be significant. In most international regimes, European countries (the UK, France, and Germany in particular) have influence that is disproportionate to their population size. The EU also has some influence, though much less than some of its constituent members. Even if bilateral negotiations and agreements between the US and China turn out to be most relevant, one could imagine third-party countries or bodies playing an important mediating role. Switzerland is probably the prime example here; Norway might also be a candidate.

Further questions:

  • Historically, how have global governance mechanisms for similar technologies (e.g., dual-use technologies) been developed? What has European influence looked like in these cases?

  • Which existing international bodies are likely to be most relevant when it comes to the governance of AI (e.g., UN Security Council, G7/8, G20, International Telecommunication Union, International Organization for Standardization)? Which European countries are most influential within these?

Directly influencing the “AI power balance” between the US and China

European countries or the EU might be in a position to influence the "AI power balance" between the US and China. For instance, they could join or abstain from potential US sanction regimes targeting strategic technologies or resources (cf. the Iran nuclear deal framework). They might prevent the acquisition of AI development groups by Chinese companies (cf. discussions about this in Germany and EU regulation, partly in response to the Chinese acquisition of the German robotics firm KUKA). They might share crucial intellectual property with the US. This is really a grab bag of different opportunities that might arise where the European response would affect the Sino-American power balance.

Further questions:

  • What are the most relevant areas/scenarios that fall under this category?

  • How has “Europe” responded in analogous situations in the past?

  • How relevant is this type of “European” influence on the power balance?

Indirect effects from building up the European “AI sector”

European countries and the EU seem interested in expanding their AI capabilities (broadly speaking). The global effects of this build-up on AI development are difficult to anticipate but potentially relevant, especially if it could be slowed down or stopped. It might draw funding and talent away from the US, but it could also serve as a talent and money pipeline to the US. It might exacerbate "race dynamics" between the US and China, or the presence of a third "safety-conscious" stakeholder might actually dampen such dynamics. All of this could affect AI timelines and which stakeholders are most likely to gain a development advantage.

Further questions:

  • How does this planned European build-up affect global talent and money flows related to AI?

  • How would it affect global “race dynamics”?

  • Overall, would it speed up or slow down A(G)I development in expectation?

Discussion

As I said before, I don’t have particularly strong views about the relative importance of these different pathways. Direct influence seems more important than indirect influence. Within that category, influence over existing leading AI development groups seems more important than potential new ones. Within the “indirect influence” category, I have barely any views. The last pathway (“Indirect effects from building up the European ‘AI sector’”) seems least important and least tractable to make research progress on.

I'd be most interested in an investigation of the potential influence over DeepMind since it could turn out to be quite significant or barely relevant. It's also a fairly straightforward and tractable issue to research since it strikes me as a fairly concrete legal question. Perhaps this could be complemented by some historical analysis of precedents for extraordinary or even extra-legal means of influence, e.g., attempted nationalizations of foreign companies during wartime.

These are other questions that strike me as most important and tractable:

  • Which existing international bodies are likely to be most relevant when it comes to the governance of AI (e.g., UN Security Council, G7/8, G20, International Telecommunication Union, International Organization for Standardization)? Which European countries are most influential within these?

  • How common is “spill-over” governance via role modeling beyond GDPR, especially in the realm of technology policy? How does AI compare to these other examples?

I would also welcome more systematic research into which European bodies and positions are most important for the different pathways, though that is also beyond the scope of this post.


  1. ↩︎

    Charlotte Stix's work is certainly the most relevant example here. In addition, Allan Dafoe from the Center for the Governance of AI at the Future of Humanity Institute (Oxford) spoke in front of the Subcommittee on Security and Defence of the European Parliament and participated as an Evidence Panelist in the All-Party Parliamentary Group on Artificial Intelligence. The Cambridge Centre for the Study of Existential Risk submitted evidence to the Lords Select Committee on Artificial Intelligence. Still, these strike me as exceptions to the overall focus on the US and China within this cause area.