Middle Powers in AI Governance: Potential paths to impact and related questions

In the spirit of Draft Amnesty Week, this is a very rough post about ways in which the governments of smaller countries might influence AI governance, focused particularly on x-risk. I see middle powers as countries that have significant regional or international sway, but that belong neither to the superpowers (the US and China) nor to the big powers (the other G7 and BRICS members)[1]. I currently think there are roughly five paths through which these countries might influence AI governance:

  1. Research Funding

  2. Leveraging AI Inputs

  3. Multilateral Institutions and Relations

  4. Bilateral Relations

  5. Creating Incentives Through National Regulations

In this post, I will flesh out all five of these ideas and raise some related questions I have about whether and how working on this could be impactful. I am uncertain how effective engaging with smaller and middle powers’ AI policies is compared to other options, believe we haven’t fully explored these paths to impact, and welcome perspectives on their feasibility.

This list is based only on my experience working in the government of a middle power, armchair reasoning about why middle powers might matter, and a rusty understanding of diplomacy picked up at university a few years ago.

Intro

On this forum, debates about AI governance often (rightly) focus on the actions of the (AI) superpowers, the US and China. These countries, with their substantial AI industries and technological resources, are usually at the forefront of discussions about global AI governance. However, that does not make engaging with these countries the only path to impact. This post explores potential leverage that middle powers could use to address AI governance challenges effectively.

I am currently not sure whether getting new people to engage actively with the policies of smaller and middle powers will be effective compared to other options they might have. But I also think that, as a community, we have not investigated these paths to impact as thoroughly as we should, and I am fairly confident that for people who are already in a position to have a positive impact on these powers, it might be worthwhile to do so.

I also suspect I am biased towards thinking that middle powers matter for AI governance, as this is what I am currently working on, so I am eager to hear pushback on whether these paths seem feasible.

Research Funding

One critical area where middle powers can make a significant impact is research funding, especially for AI safety. Current investment in AI governance and safety research is negligible compared to the investment flowing into advancing AI capabilities. Middle powers could shift this balance by funding research that advances AI safety, including technical safety work, interpretability, system robustness, and alignment research.

Since catching up in conventional AI safety research probably comes with high costs, middle powers might instead explore specialized funding strategies. These could focus on providing global public goods in AI safety that are not currently addressed by any single actor, such as incident data gathering, external monitoring of AI labs, auditing systems, or improving AI forecasting.

Questions I have about research funding

  • How can middle powers effectively allocate research funding to maximize their impact on AI safety?

  • Is the main constraint money or talent? If talent is the bottleneck, how well can more funding be converted into talent?

  • What specialized funding strategies can middle powers adopt to support global public goods in AI safety that are not currently addressed by major governments or labs?

Leveraging AI Inputs

Another area of potential influence is the AI supply chain. The computing hardware needed to train AI systems has a highly concentrated supply chain, which passes through several middle powers that might therefore have significant leverage over access to compute. I’m thinking in particular of South Korea (memory chips), Japan (semiconductor materials and equipment), Taiwan (advanced chip fabrication), and the Netherlands (lithography machines), but there might be others in this category.

For these countries, it might be feasible to have a significant impact on who can develop future AI systems through export restrictions or other regulations that limit which actors their companies may supply with state-of-the-art compute, or with the inputs needed to produce it. Other inputs to AI development, such as talent or data, might also be susceptible to these kinds of restrictions.

Questions I have about AI inputs

  • What specific export restrictions or regulations could middle powers implement to control access to critical AI inputs like compute resources, talent, and data?

  • Will the above-mentioned middle powers maintain enough independent control over their industries to implement these kinds of policies?

  • Apart from compute, are there other inputs to AI development that might be restricted? And if so, who has the capability to restrict them?

Multilateral Institutions and Relations

Middle powers can also play a significant role in shaping the future of AI through active participation in regional and international institutions, such as the EU, UN, or OECD. By influencing international regulations and standards, these countries can contribute to global discussions on AI governance.

The EU is on track to pass its own AI legislation. Middle powers that are EU members might influence its decision-making through the positions they take in the Council of the EU, and we might see other regional bodies play a role in the governance of AI as well. On the global stage, the UN and OECD have both already played a part: the OECD mainly in standard-setting and the creation of definitions, and the UN in starting the process towards a global institution that might regulate through soft law. AI governance is currently being explored by the UN High-Level Advisory Body on Artificial Intelligence, which includes representation from a wide range of countries. While such broad participation might sound democratic and positive, it might also produce a proposal far too ambitious to get the superpowers on board. This illustrates the downsides that the participation of many middle powers might have for the feasibility of AI governance.

Another example is the participation of middle powers in summits like the AI (Safety) Summits. The list of invitees is already a decent indication of which middle powers might be influential, since my understanding is that countries were selected based on whether they will have an influence on the future development of AI.

Questions I have about multilateral institutions and relations

  • How can middle powers enhance their influence in multilateral institutions to promote global AI governance frameworks?

  • What mechanisms within multilateral institutions are most effective for smaller nations to introduce or influence AI governance and safety standards?

  • Can a coalition of middle powers within multilateral institutions like the UN or OECD effectively counterbalance the influence of AI superpowers in shaping AI governance norms? Should they?

  • How can middle powers ensure their interests and perspectives on AI risk and governance are adequately represented in multilateral discussions?

Bilateral Relations

Another path through which these powers could shape AI governance is their bilateral relationships with the bigger players, in particular the US and China. While it seems unlikely that middle powers will be able to shape the policy decisions made in Washington or Beijing outright, they could be influential on the margin: by nudging superpower policy in slightly more safety-minded directions in negotiations, and whenever they are consulted, they could have a positive influence as well.

Additionally, middle powers might be able to provide “neutral ground” for diplomacy between the AI superpowers, playing an enabling role in reaching some form of international agreement.

Questions I have about bilateral relations

  • What strategies can middle powers employ in bilateral relations to nudge superpowers towards safer AI development practices?

  • Can bilateral agreements between middle powers and AI superpowers include provisions that positively influence AI safety and governance?

  • Can middle powers leverage their strategic or economic relationships with superpowers to gain concessions or influence in the realm of AI development and governance?

  • What are the risks and potential downsides of middle powers engaging in bilateral negotiations with superpowers on AI governance?

Creating Incentives Through National Regulations

By passing new AI legislation, middle powers have the opportunity to set precedents that could motivate AI labs to adhere to higher standards of safety, interpretability, and robustness. By enacting laws that demand stringent compliance as a condition for market access, these nations could indirectly influence global AI practices, much as the EU’s GDPR shaped privacy practices worldwide. The key lies in designing regulations that are reasonable enough for AI developers to comply with, while still setting a standard that meaningfully reduces global AI risk. If executed in a synchronized manner among middle powers, this approach could establish a new norm of enhanced safety standards in AI development, provided the regulations are targeted effectively and labs have a genuine incentive to comply. Alternatively, middle powers might experiment with new policies that then spread through policy diffusion.

Questions I have about national regulations and incentives

  • How can middle powers design national laws and regulations to set a high benchmark for AI safety, interpretability, and ethical standards?

  • What strategies can middle powers employ to prevent the displacement of unsafe AI development practices to less regulated environments?

  • Is there potential for a coalition of middle powers to harmonize their AI regulations, thus establishing a global baseline for AI safety and governance standards? (Under what circumstances) would this be helpful?

  1. ^

    Wikipedia lists the following countries as middle powers, which also seems roughly accurate to me: Algeria, Egypt, Ethiopia, Kenya, Nigeria, South Africa, Argentina, Canada, Chile, Colombia, Mexico, Peru, North Korea, Indonesia, Iran, Iraq, Israel, Kazakhstan, Kuwait, Malaysia, Pakistan, Philippines, Qatar, South Korea, Saudi Arabia, Singapore, Taiwan, Thailand, Turkey, United Arab Emirates, Vietnam, Austria, Belgium, Czech Republic, Denmark, Finland, Greece, Hungary, Ireland, Netherlands, Norway, Poland, Portugal, Romania, Spain, Sweden, Switzerland, Ukraine, Australia, New Zealand