Should you work in the European Union to do AGI governance?

Whether Artificial General Intelligence (AGI) is safe will depend on the series of actions carried out by relevant human actors that eventually results in more or less safe AGI. Artificial Intelligence (AI) governance refers to all the actions along that series that help humanity best navigate its transition towards a world with AGI, excluding the actions directly aimed at technically building safe AI systems.[1]

This post compiles and briefly explains arguments for and against the relevance to AI governance of actors based in the European Union (EU), and for and against the value of pursuing work in EU policymaking. It builds on previous attempts to help EAs determine whether to work on AI governance in the EU.[2][3] In doing so, I hope to help fill a gap in the EA community’s collective assessment of the importance of the EU and to initiate a discussion on the various arguments with individuals exploring career, research, or funding opportunities.

The arguments listed have affected career and funding decisions in the past. Note that several of them apply to the field of AI governance in general, but are included because they are especially salient for EU AI governance. Note also that this post makes no attempt to weigh the arguments against one another, even though I in fact disagree with several of them (1 argument in favor and 2 arguments against). As a disclaimer, on balance I believe that the EU is relevant for AGI governance, so please keep me on my toes: I have tried hard to strengthen the arguments against, and the arguments with which I disagree, to provide a solid basis for discussion and reflection.

You can use the table of contents on the left to navigate to the arguments you want to read about (on desktop browsers, not mobile). I welcome additional input! You can help by answering these questions:

  • What are additional arguments (for or against) that have not been included in this list?

  • If you think I don’t do justice to an argument you are familiar with, how would you explain it differently? What are the nuances I miss?

  • Which of the arguments do you think are the strongest?

Notes for those unfamiliar with the EU: “Member States” refers to the nations that are part of the EU. The EU has 27 Member States and around 450 million citizens, and accounts for 16% of global GDP in purchasing power standards (PPS). Neither the UK nor Switzerland is part of the EU. For comparison, the US has 332 million citizens and accounts for 16.3% of global GDP PPS, and China, with 1.423 billion citizens, accounts for 16.4% of global GDP PPS.[4]

I thank Risto Uuk, Laura Green, Andrea Miotti, Mathias Bonde and Daniel Schiff for their valuable feedback on this post, as well as everyone who has helped me map or better understand these arguments throughout the years. Disclaimer: the post’s original title was “Is the European Union relevant for AGI Governance?”, but as rightly pointed out in this comment, the post is more accurately described as being about the value of pursuing work in EU governance if you are concerned about AGI. Besides the title and the introduction, the text has remained identical.

Arguments in favor

1. The Brussels Effect

The Brussels Effect refers to EU legislative decisions having impact outside the EU. There are two types of Brussels Effect: the de facto effect (market players adapting to the EU’s decisions in non-EU markets even when it is not legally required) and the de jure effect (foreign regulators adapting their own legislation to match the EU’s decisions). These effects arise through the EU’s deliberate use of administrative and legal mechanisms to ensure extraterritorial impact. Whether a Brussels Effect also kicks in for AI governance has been explored here and in an upcoming paper by FHI/GovAI. The preliminary answer seems to be yes.

How does it affect the EU’s relevance?

The Brussels Effect would multiply the impact of EU decisions on regulatory and market players worldwide. This would make EU AI policymakers’ actions more important for safe AGI development, and therefore would imply the EU is more relevant to AGI governance.

2. The EU is taking AI governance decisions now

The EU institutions (European Commission, Council of the EU, and European Parliament) are developing their landmark AI legislation now. After various forms of public and stakeholder consultations over the past two years, the European Commission submitted a legislative proposal to regulate artificial intelligence in April 2021. As of January 1st 2022, the latest compromise put forward by the Council of the EU includes for the first time the legal notion (to be further defined) of general-purpose AI systems.

How does it affect the EU’s relevance?

The debate underway about this legislation – likely to last until 2023 – will see increased research and advocacy about certain governance concepts and their prospects for achieving certain outcomes, affecting how people think about these concepts. For example, the notions of “accuracy”, “robustness” and “general purpose” in AI systems will have to be defined legally. For better or worse, because of path dependency in policymaking and the inefficiently long lifecycle of legal acts, these concepts could shape the way industry ensures “accuracy” and “robustness” in AI for the next 15-30 years. This suggests that now might be a rare opportunity to influence AI safety governance before the development of AGI.

3. The EU is part of many AI development and governance collaborations

The EU has historically relied on and benefited from multilateralism and transnational partnerships to shape its international political and economic environment. The EU or its Member States lead or co-lead the EU-US Trade & Technology Council, the OECD AI Observatory and the Global Partnership on AI. The EU also takes part in the G7 and G20, and most of its Member States belong to NATO. EU-China relations are better than US-China relations. In the private sector, there are multiple industry partnerships through which the EU deploys its soft power (GAIA-X, the Alliance on Processors & Semiconductor Technologies, InTouchAI.eu, the Alliance for Industrial Data, Edge and Cloud) to affect AI governance and standardization.

How does it affect the EU’s relevance?

The EU aims to impose – through soft power – its approach to AI governance on the rest of the world. If the approach turns out to be sound enough to help reduce existential and catastrophic risk from AGI, it is important to foster its diffusion. If the approach is not sound, it is important to either correct it or help prevent its successful diffusion. While the AI safety governance research field has not yet reached a scientific consensus on what is sound or not, EA-aligned individuals in positions of influence within the EU AI governance field would over time become better positioned to make the judgment calls needed to improve AGI governance. Note that this is distinct from the Brussels Effect, which spreads legislation; the present argument instead relies on spreading EU standards, norms, trade agreement clauses, ethical board reviews for research projects, etc. through these partnerships.

4. The EU market and political environment favor AGI safety

The argument is that the EU is more likely to aim for AGI safety than the US and China. The reasoning is based on three claims. First, EU consumers tend to demand more trustworthy goods (in absolute terms) than consumers in other regions, for economic or historical reasons, which incentivizes global industry to supply such trustworthiness. Second, the EU has enshrined the precautionary principle in its constitution (i.e. its Treaties). Third, as it lags behind in the AI race, the EU has incentives to slow down the first players by setting high trade barriers, such as trustworthy-AI requirements and legislation.

How does it affect the EU’s relevance?

If this argument holds, the EU might be the jurisdiction where it is easiest to promote AI safety research and development through AI governance, while still influencing the global supply of AI technologies.

5. Direct influence from inside relevant AI labs is limited

Individuals who consider entering the field of AI governance sometimes believe that the EA community has (i) identified the relevant AI labs (i.e. labs that might develop AGI) and (ii) can directly influence them towards a safe outcome. The present argument questions the EA community’s capacity for identification because of the multiplication of AGI projects, the possibly diffuse nature of superintelligent systems breakthroughs (Comprehensive AI Services as General Intelligence) and the confidential nature of relevant projects due to trade secrecy or national security. The argument also questions the EA community’s capacity for influence because of past developments and the current situation. For example, DeepMind recently failed in its bid for independence from Google. Key safety personnel have left OpenAI for unknown reasons after OpenAI made a deal with Microsoft. There is an AI race underway in the private sector, and another underway between the US and China (even though the EA community has considered both types of AI race as undermining AI safety at least since Nick Bostrom’s Superintelligence). Finally, there are dozens of AGI projects identified by GCRI with no direct EA contacts. The present argument is that influencing the developers directly from within the lab is not as effective as previously thought, and that there is therefore a greater need for policy-based AGI governance to influence these developers.

How does it affect the EU’s relevance?

This argument, if true, would increase the relevance of AI governance activities outside the relevant AI labs in general, including EU AI governance. AI policy might be the only effective way to shape the culture and normative institutions of the industry and field as a whole. This in turn might be the only way to redirect the market and political forces at play in AGI labs so as to ensure AGI safety.

6. Growing the political capital of AGI-concerned people

Working on AI governance in the EU while it is actively regulating AI is a rare opportunity to increase one’s political capital in this field (through increased credibility, network and expertise). By the time the EA community or the scientific community knows the best way to govern the development of safe AGI, AGI-concerned individuals can spend this accumulated capital on influencing decision-makers or on becoming decision-makers themselves. The argument also relies on the observation that it takes several years to build the network, trust and supporters needed to gain positions of influence. Moreover, every decision taken in AI governance is an opportunity to gain some political capital. If AGI-concerned individuals are not in the room when these decisions are made, they not only fail to increase their own political capital but also enable individuals not concerned about AGI to accumulate it. By the time AGI is about to happen, decision-makers and their relevant advisors will therefore more likely be individuals who are not concerned about AGI (just as today’s influential people in AI governance are likely to be yesterday’s people who worked on privacy, cybersecurity, or digitalization policy files).

How does it affect the EU’s relevance?

From a political economy perspective, missing the opportunity to earn political capital today on AI-related decisions makes it more difficult for EAs and longtermists to influence humanity’s transition towards AGI tomorrow.

Other arguments in favor

  • Exploration value – AI governance is a new field and the EA community does not know what’s most promising, hence more EAs should explore EU AI governance to learn whether it is promising and what types of policies are feasible to recommend there or elsewhere.

  • Personal fit – EU citizens have the best personal fit to work on this, and it is difficult for them to become American/British/Chinese or to gain access to influential positions in the US/UK/China.

  • Low-regret career pathway – due to the way it is organised, many roles in EU governance require juggling multiple legislative files, either in parallel (e.g. parliamentary assistants and diplomats) or sequentially (Commission staff and lobbyists). The EU is relevant in development aid (the biggest development aid budget in the world), research & innovation, emergency response, health & science policy, and animal welfare. Anecdotal evidence from the direct work of 3 EAs in EU policy institutions supports this.

  • High personal career capital – because of their influence, the EU institutions are very competitive to enter and are therefore a signal of competence. In addition, they offer well-paid and stable jobs. As most experts in industry, civil society or academia who want to influence policy reach out to these institutions to share their views or to invite their staff to events, these roles are also attractive for building the network needed to find, in due course, a valuable professional exit opportunity (e.g. government affairs roles at top AI companies or director at consortia-based AI projects).

  • Neglectedness – there is currently a grand total of ~5 FTE EAs working on EU AI governance, making the next person to work on it potentially very valuable counterfactually.

Arguments against

1. The EU is not an AI superpower

This argument against the relevance of the EU is that its industry and its research & innovation ecosystem are not influential in the technical development of AI technologies, and therefore unlikely to be influential in the development of AGI-related technologies. From the number of AI companies and startups to investment levels to the number of researchers, the EU lags behind on many metrics that are used as proxies for relevance to AGI governance (see for example this CDI report, this Elsevier report and this Bruegel analysis). Only 10 out of 74 identified AGI projects are in the EU,[5] and these are far from being the best-funded projects.

How does it affect the EU’s relevance?

Without a scientific, technological or commercial lead in some aspects of AI, the EU’s “field share” (market share, share of IP, share of publications, share of talent, etc.) declines. If the relative influence of EU policies on research, technology and markets is proportional to this field share, then I expect that influence to decline: these policies, and the EU approach, are less likely to become the norm in the development of AI.

2. EU legislation does not matter enough

When considering the series of actions by various actors along the potential pathways to AGI, the claim here is that EU government officials’ actions – such as legislation – won’t influence the outcome enough relative to the costs of influencing these actions. Therefore, they won’t shift our level of confidence in a safe AGI outcome enough to be worth the investment of EA time and/or money. The claim is particularly strong for very short timelines, around <5 years (i.e. before the legislation has had time to be implemented and have a structural effect on industry and R&D), and for very long ones, >40 years (at which point the path-dependency effect fades and is in any case offset by the evolution of the ideological, research, institutional and technological landscape). There are various reasons and circumstances why EU regulations might not matter in the end: if the EU fails to develop balanced regulations, if it fails to transfer these regulations proactively abroad, or if it fails to develop effective safety-enhancing regulatory requirements. Moreover, if the global digital system surrounding AI doesn’t remain interoperable (e.g. structural decoupling and polarization of the technological landscape) and sustainable (e.g. an AI winter), EU regulations will matter less to the final outcome.

How does it affect the EU’s relevance?

One of the main pathways of influence on AGI from the EU relies on its regulations on AI to promote a safe outcome. If regulations do not matter enough, the EU’s relevance in AGI governance will be greatly diminished.

3. EU governance would slow down US research towards AGI more than it would Chinese research

This argument is particularly strong if the AGI alignment issue is easy to resolve. It suggests that safety-enhancing policy recommendations emanating from the EU would affect US companies and labs more than Chinese ones, as the US AI industry derives more revenue from the EU than the Chinese AI industry does. This is because the Chinese industry relies heavily on its domestic market (where it does not have to comply with existing EU laws for many digital technologies), while the US technology industry relies on exports, notably to the EU (where it has to comply). Assuming safety-enhancing regulations slow down innovation and research in raw AI capabilities, this would lead to a bigger slowdown in the US than in China. In a world where AGI alignment is easy to resolve, the slowdown of the US industry relative to the Chinese industry would therefore make China more likely to get controllable AGI first. The additional assumption is that Chinese AGI developers would be less likely than US AGI developers to use a controllable AGI in ways that align with the EA community’s values.

How does it affect the EU’s relevance?

If the argument holds, then the priority of AI governance as a cause area would be to ensure the first controllable AGI systems are developed by entities that would use them for the flourishing of humanity as a whole. This is likely to require the ability to control the supply of goods necessary to build AGI (cutting-edge chip designs, lithography systems, highly skilled researchers, …) to favor some players’ progress towards AGI over others’. The EU does have some industrial capacity at some of the “AGI supply chain” bottlenecks, notably ASML’s near-monopoly on cutting-edge lithography for semiconductors. However, given its commitment to open trade and multilateralism and its internal divisions, it is unclear whether the EU would ever be able to manipulate supply chains meaningfully in this way.

4. The EU is irrelevant to military and national security AGI pathways

The EU has little military power compared to the US and China, and its national security apparatus is negligible (existing mostly at the Member State level, in an uncoordinated fashion). It is also unclear to what extent foreign military projects on advanced AI or AGI technologies would actually be affected by the EU, but presumably very little. If the critical pathway to AGI goes through military or national security research, I therefore expect EU decisions to be much less relevant to AGI safety. This is not a temporary but a structural situation: the EU has no security and military institutions equivalent to the US Department of Defense (DoD), National Security Agency (NSA), Defense Advanced Research Projects Agency (DARPA) or Intelligence Advanced Research Projects Activity (IARPA), which are candidates for hosting AGI research projects.

How does it affect the EU’s relevance?

While there is growing support for, and work towards, a coordinated EU military and security system, notably for advanced research projects, even the most optimistic experts do not foresee this happening in any meaningful way within the next 10 years. Even then, it does not follow that it could significantly influence Chinese, Russian or US military projects on AGI. There could be some slight influence from the EU if the (civilian) R&D and quality assurance norms it sets become strong industry and academic field norms in the coming years and transfer to the technicians and project managers of military or security labs, but there is no precedent for this. If AGI gets developed through military or national security research, the EU would therefore be quite irrelevant to AGI governance.

5. The EU is fragile and could become irrelevant

Although the EU institutions and their ancestors have progressively gained power over the past 70 years (in terms of the level at which various topics are decided), the EU could disintegrate. Brexit is an example of this process in action. Crises like COVID-19 or the 2010 sovereign debt crisis have generally resulted in more power and resources being concentrated at the EU decision-making level, but only because the leadership happened to rise to the occasion. There are recurrent issues that the EU has yet to resolve – migration policy, coordinating foreign policy among Member States, Member States’ rule of law and populism, etc. Nothing guarantees that the EU will survive future crises related to these issues.

How does it affect the EU’s relevance?

If the EU institutions dissolve or their power significantly weakens, investment in gaining influence at the EU level would be much less valuable. The influence of existing EU laws would persist through national legislation (because Member States have to integrate EU laws into national law), but the collapse would reduce the ability of EAs with EU governance experience to influence future decisions on AGI governance.

6. Policy-related AI governance could be net negative

This argument refers to AI governance activities that involve policy debate or policy changes. There are various ways in which policy-related AI governance could result in a net negative outcome for AGI safety.

First, there might be an information hazard: a strong version of the information-hazard argument assumes that many policymakers are already familiar with the concept of AGI, but suggests that a better explanation of AGI’s implications (to justify safety-enhancing policies) could trigger decision-makers to switch towards a race for AGI. This argument assumes that AGI governance requires explicit discussions of AGI.

Second, it could lead to the politicization of AGI safety. As AI safety turns into a policy topic, stakeholders will have incentives to fund, publish, or amplify research aimed mostly at swaying the political debate rather than solving the control problem. In the extreme, this could result in dangerous signalling tactics and games (e.g. industry underreporting the risks of its AGI research, or publicly releasing a powerful proto-AGI programme just to show confidence in the algorithm being safe). This argument assumes that AGI governance requires treating AGI safety as a separate policy topic. There are precedents where such politicization has occurred to the detriment of public health: research on the scientific hypotheses of anthropogenic climate change, of the causal link between cigarettes and cancer, and of the anthropogenic ozone hole has in each case suffered from organized actions to delegitimise scientific evidence or suggested solutions.

Finally, policy-related AI governance could waste AI safety resources – there is significant uncertainty about what to recommend to policymakers. For example, no one can demonstrate with certainty that any EA recommendation integrated into the EU AI legislation would be a net positive compared to the default pathway taken by the EU. Regardless of the uncertainty, the staff or financial resources required to achieve a given unit of impact on the likelihood of safe AGI through AI policy might be higher than those required to achieve the same impact through technical AI research (or different cause areas). Both this uncertainty and the cost of achieving change could result in AI governance work being wasteful.

How does it affect the EU’s relevance?

This argument reduces confidence in the impactfulness of AI governance through policymakers. As current EU AI governance approaches rely almost exclusively on policymaking (rather than norm-setting in industry, the military, academia, etc.), this would reduce the relevance of the EU.

Other arguments against

  • Higher returns on EA investment in the US and China AI governance space than in the EU’s – there might still be too few resources invested in AI governance in the US and China to expect diminishing returns relative to the EU AI governance space, so it makes sense to continue prioritizing the US and China.

  • The EA EU AI governance space is not mature enough for personal career progression and impact – not enough resources have been spent on the EU AI governance space yet to expect that the next resources spent would accomplish much: unlike the US and UK EA AI policy spaces, the EU EA AI policy space currently amounts to ~5 FTEs. There is therefore no “gravity effect” or “network effect” similar to the US and UK, where the well-funded CSET and FHI/GovAI have provided stable jobs allowing EAs to specialize and get integrated into the AI governance space, making it much less risky career-wise to migrate there. There are two perspectives on this argument. On one hand, in terms of career decisions, it is not worth EAs’ time to work on EU AI governance because there is no significant investment by EA funders into EU AI governance that could derisk the approach. On the other hand, in terms of funding decisions, it is not worth EA donors’ investment because there are too few EAs in EU AI policy to ensure an informed and effective use of the funds “on the ground”.

  1. ^
  2. ^
  3. ^
  4. ^
  5. ^ Aleph Alpha, Animats (formerly Alice In Wonderland), Curious AI (though the core team now seems to be at Apple), EleutherAI, Fairy Tale AGI solutions, FLOWERS, GoodAI, Mauhn, SingularityNET and Xephor Solutions – based on GCRI’s 2020 landscape, adding Aleph Alpha & EleutherAI.