Will the EU regulations on AI matter to the rest of the world?

I would like to hear your views on whether EU regulations on AI are likely to matter to the rest of the world. I have been dedicating significant time and career capital to contributing to these regulations, and I want to make sure I understand the strongest arguments against my impact. One of these is that the regulations won’t affect AI developers in the rest of the world. Through the lens I suggested earlier (i.e. that “AGI safety” is a series of actions), the claim would be that the EU regulations don’t factor significantly in the outcome, i.e. they are irrelevant, and therefore any intervention on them is wasteful.

Concretely, the argument goes this way: besides GDPR and perhaps machinery governance, there is little evidence that EU decisions on digital technologies matter significantly outside EU territory. The global reach of EU decisions on digital matters (the “Brussels Effect”) could be a self-serving EU talking point, and the tens of millions that GAFAMI have been spending yearly in Brussels over the past three years are mostly spent lobbying on non-AI digital files. Short of collecting further evidence on this argument (I believe there is an FHI working paper soon to be released; feel free to add the reference whenever available), I can at least outline the factors likely to alter the influence that the EU regulations on AI will have. There are EU-specific factors and environmental factors.

In terms of EU-specific factors, crucial considerations are the balance, transferability and effectiveness of the pro-safety EU policy.

  1. Failing balance: the EU regulating itself out of relevance. If a scandal on the scale of Cambridge Analytica or Snowden’s revelations comes up in the coming 12 months, it might put the European Parliament under pressure to adopt broad bans or similarly unbalanced measures that would asphyxiate the EU market. This would make the benefits of trying to capture the juicy EU market smaller than the compliance costs, and make the potential pro-safety regulations untransferable abroad. That is also why just pushing against the technology as a whole is not a good idea in the EU: having to demonstrate mathematical alignment and robustness for every narrow AI system rolled out would be so burdensome that most developers would leave the market. The counterargument is that it would significantly incentivize research and investment into alignment and robustness testing/demonstration. On the other hand, if there is balance (e.g. asking companies to do some robustness and accuracy testing and incentivizing them to invest in safety R&D, without stifling the market), we could expect the market to “take the hit” of the regulation and invest in compliance rather than exit the market entirely. If the regulation is effective at setting safety requirements without unduly constraining the freedom to innovate and compete, we can hope this will drive wider adoption even in traditionally more pro-innovation jurisdictions. This implies that any “unit of foregone freedom to innovate” should pay off in “units of uncertainty-adjusted AGI safety”, and it calls for a very surgical approach to improving the legislation. Despite the public cries from GAFAMI, most of these companies are not even lobbying on the AI regulation because they find it very balanced (or so they declare in private).

  2. Failing transferability: the EU fails to transfer most of its pro-safety measures. Even if the policy is balanced, it might fail to transfer. GDPR transferred in a more or less unintended way: yes, it was meant to apply to foreign companies; yes, there was recognition of “adequacy” and the notion of foreign equivalents of Data Protection Authorities; but, as far as we could tell, no mechanisms were in place to promote GDPR compliance in industry abroad. Commission staff were surprised when Indian and Japanese (and reportedly Chinese) policy staff asked to come to Brussels to discuss GDPR with its architects and translate it into their respective languages and legal environments, to inform their domestic privacy policy negotiations. In the case of AI, though, the Commission is serious about promotion: it has launched an international alliance to promote “trustworthy AI” and the “EU approach to AI”. This alliance’s role is basically to do public diplomacy, research networking, and influence-building, and to make concrete “conversions” to the EU approach. The same “alliance” mechanism applied to cybersecurity has been described as very successful by EU insiders, but I am not knowledgeable enough about cybersecurity governance to assess how true that is. If these mechanisms fail, a significant portion of the “impactfulness” of the policy will be lost. However, for now, I don’t have reason to believe they will fail.

  3. Failing effectiveness: the EU creates ineffective regulations. That is by far the most likely bad scenario in my opinion. The little “click to accept cookies” banner could easily get an AI equivalent for developers (“I have reviewed my algorithm and confirm it is aligned with humanity’s values” [√]). This is, after all, a policymaking ecosystem that, not content with having invented website-specific “cookie banners”, doubled down a couple of years later by adding “third-party data processing pop-ups” to the mix (again website-specific, rather than, say, having your browser automatically share your preferences in HTTP requests; see the sketch below). Passing ineffective, easy laws would enable everyone to feel good about themselves and kill the momentum for more effective and creative mechanisms (e.g. mandated independent red-teaming, incubation sandboxes, software probes, etc.). Worst of all, when looking at policy negotiations, good ideas are put forward that never make it into the final text (GDPR originally had a strong right to explainability and an explainability labeling system that could have increased investment in explainable AI quite significantly; for better or worse, these were dropped). The AI Act seems to have relatively smart ideas (sandboxing, monitoring the evolution of the technological landscape, the distinction between narrow and general-purpose AI...), but I fear the quality of the ideas will decrease at each step because of all the compromises.
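To make the contrast concrete, here is a minimal sketch (purely illustrative, not anything mandated by EU law) of what “preferences carried in the request itself” can look like, using the existing Do Not Track and Global Privacy Control request headers; the placeholder URL and the choice of Python’s requests library are my own assumptions for illustration.

```python
# Illustrative only: a client declaring its privacy preferences once, in the
# HTTP request itself, instead of answering a per-website pop-up.
# "DNT" (Do Not Track) and "Sec-GPC" (Global Privacy Control) are real,
# browser-settable request headers; "https://example.org" is a placeholder.
import requests

PRIVACY_HEADERS = {
    "DNT": "1",      # "do not track me"
    "Sec-GPC": "1",  # "do not sell or share my data"
}

response = requests.get("https://example.org", headers=PRIVACY_HEADERS)
print(response.status_code)
```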

In terms of environmental factors, developments in the global governance of AI and geopolitics will affect whether the EU becomes more or less relevant to AI safety governance. In particular, transferability (explained above) is also affected by the situation in the rest of the world:

  1. Interoperability of the digital system(s) and their governance: in a world where China and the US decouple structurally on the technology side (separate consumers, suppliers, industries and technologies, and therefore separate institutions and governance), global convergence on, e.g., the EU’s higher standards will be more difficult, and the EU policy’s impact will therefore be more limited. Think of how little influence transatlantic internet governance has on China’s decoupled internet. Regardless of EU policy, this would be bad for EAs, as they would effectively have to shape not only a transatlantic AI governance regime but also a Chinese AI governance regime; if either of these fails at embedding safety, it would structurally decrease our confidence in the outcome being safe.

  2. Sustainability of the digital systems: if China or the US or both plunge into domestic mayhem (e.g. November 2024 in the US; domestic discontent if China’s real GDP growth converges to the 2-3% of developed economies within 5 years), there might be a serious weakening of investors’ appetite for AI R&D. Climate change might also exert so much pressure on both countries over the next 15 years that private and public investments are gradually redirected towards other technologies. Whatever the reason, an AI winter would slow down market-driven progress towards more general AI systems, and would make the influence of EU policy abroad less of a priority.

  3. Joint effect of both: understanding which of these worlds we are likely to end up in is useful for assessing the relevance of EU regulations on AI.

    1. [sustainable & interoperable] = cyber-peace --> maximal significant & relevant influence of EU policy on the world.

    2. [unsustainable but interoperable] = collapse of the global commons --> relatively significant but irrelevant influence of EU policy on the world.

    3. [unsustainable & not interoperable] = cold war 4.0 --> relatively little but anyway irrelevant influence of EU policy on the world.

    4. [sustainable but not interoperable] = digital trench warfare = little influence of EU policy on the world, even though it’d be needed.