Sharing the AI Windfall: A Strategic Approach to International Benefit-Sharing
Summary
If AI progress continues on its current trajectory, the developers of advanced AI systems—and the governments who house those developers—will accrue tremendous wealth and technological power.
In this post, I consider how and why the US government[1] may want to internationally share some benefits accrued from advanced AI—like financial benefits or monitored access to private cutting-edge AI models. Building on prior work that discusses international benefit-sharing primarily from a global welfare or equality lens, I examine how strategic benefit-sharing could unlock international agreements that help all agreeing states and bolster global security.
Two “use cases” for strategic benefit-sharing in international AI governance:
Incentivizing states to join a coalition on safe AI development
Securing the US’ lead in advanced AI development to allow for more safety work
I also highlight an important, albeit fuzzy, distinction between benefit-sharing and power-sharing:
Benefit-sharing: Sharing AI-derived benefits that don’t significantly alter power dynamics between the recipient and the provider.
Power-sharing: Sharing AI-derived benefits that significantly empower the recipient actor, thereby changing the relative power between the provider and recipient.
I identify four main clusters of benefits, but these categories overlap and some benefits don’t fit neatly into any category: Financial and resource-based benefits; Frontier AI benefits; National security benefits; and ‘Seats at the table’.
I conclude with two key considerations with respect to benefit-sharing:
Credibility will be a key challenge for benefit-sharing and power-sharing agreements (See more)
Benefit-sharing strategies should account for potential risks (See more)
Introduction
Advanced AI systems could empower their developers—and the governments who supervise those developers—with enormous benefits. For example, advanced AI systems[2] could give rise to tremendous wealth, breakthrough medical technologies, and decisive national security benefits.
This post examines how these benefits could be shared internationally. In particular, I examine how and why the US government (henceforth USG) may want to strategically share some of the benefits accrued from advanced AI to further its own interests and other states’ interests, and ultimately bring about a safer world.
Past work[3] on international benefit-sharing has primarily focused on sharing benefits to address widespread job displacement and promote welfare and equality globally. I support such altruistic benefit-sharing to remedy the uptick in global power- and income-inequality that AI could drive. But I want to expand discussions of international benefit-sharing to include sharing benefits as a tool for positive-sum trades.
By offering AI-derived benefits—such as economic aid, monitored frontier AI model access, or security assurances[4]—the USG could enable commitments that are in the interest of all parties at the table and promote global security. For example, the US could provide allied states with monitored access to private, cutting-edge frontier AI models. In exchange, allied states could take steps domestically to prevent the proliferation of weaponized AI systems. (I discuss other ideas on how benefit-sharing could be used in international AI governance below).
There is precedent for US-led strategic benefit-sharing. Consider the Marshall Plan. Post-WW2, the USG helped Western Europe rapidly recover by providing benefits like financial aid and modern technologies, which in turn strengthened the US’ key strategic alliances, created mutually-beneficial markets for US goods and services, promoted democratic capitalism, and stabilized a region that could have otherwise triggered new conflicts.
In what follows, I expand on the types of international AI governance agreements that strategic benefit-sharing could unlock; the distinction between benefit-sharing and power-sharing; concrete benefits the USG could share; and two key considerations I see with respect to strategic benefit-sharing (improving credibility and mitigating risks).
What types of international AI governance agreements could strategic benefit-sharing (and power-sharing) unlock?
While sharing AI-derived benefits could in principle be used to incentivize international agreements on many topics, this post focuses on benefit-sharing as a tool to unlock agreements on questions like who develops next-generation AI systems and how they’re governed.
What exactly should those international AI governance agreements look like? I’m not sure yet. With so much uncertainty about the capabilities of future AI systems, their risks, and future domestic regulation in states like the US and China, it’s hard to know which international institutions and agreements are feasible and desirable.
So, rather than proposing a specific international agreement involving benefit-sharing, I highlight two different plausibly-good outcomes that benefit-sharing could help unlock.
Sharing benefits to incentivize other states to join an international coalition on safe AI development: In ‘Chips for Peace’, Cullen O’Keefe sketches a proposal in which a coalition of states, likely led by the US, could agree to implement domestic safety regulation and commit to non-proliferation of harmful or high-risk capabilities. Strategic benefit-sharing is a core part of this proposal: Members of this coalition could drive broad adoption of safety and non-proliferation commitments by sharing benefits that “compensate for the burdens associated with regulation.”
Sharing benefits to secure the US’ lead in developing the most advanced AI systems, which could prevent a neck-and-neck international AI arms race. In turn, this could allow a US-led frontier AI effort to slow down to do more technical safety work and evaluations, and to carefully consider high-stakes decisions about how to integrate AI systems into society. Securing the US’ lead could involve sharing benefits (and potentially power) with both competitors and allies. With respect to competitors, the USG could, for example, share defense-dominant narrow AI models with competitor states in exchange for credible commitments to stop certain high-risk advanced AI development and to not take actions that could undermine the US’ lead, such as stealing frontier AI model weights. With respect to allies, the USG could, for example, share increased US protection and financial benefits with key states in the AI supply chain (e.g., Taiwan) in exchange for critical inputs to its safe AI development and stronger sanctions and export controls against competitors.
Benefit-sharing vs power-sharing: An important, fuzzy distinction
If the USG wants another state to sacrifice something significant (e.g., its domestic advanced AI program), the USG may need to do more than just share benefits. The USG may need to share power.
That is, the USG may need to make concessions to its own power and share technology, national security capabilities, or ‘control rights’ for its most advanced AI models that significantly empower other states. I call this power-sharing and view it as distinct from benefit-sharing.[5] Note, however, that this distinction is blurry: financial investments are a prototypical example of benefit-sharing, but if the USG gives very lucrative, irrevocable financial benefits to another state, even financial investments start looking like power-sharing.
Power-sharing is more consequential than benefit-sharing, but it should still be on the table when thinking about international agreements.
Credible power-sharing may unlock important international agreements that ordinary benefit-sharing couldn’t. For example, suppose the USG saw it as an imperative that China stop its own development of extremely advanced AI systems. Would benefits be enough? Without credible power-sharing, China may worry that the US could always defect from its agreement, retain complete control over a powerful new technology, and leave China with little to show for its costly commitment. If tensions between the US and China are high, China could see it in its interest to preemptively attack US data centers, for example, to stop the US from gaining such a decisive strategic advantage.
Power-sharing could also be important to prevent dangerous concentrations of power.
What AI-derived benefits could the USG share with other states?
I identify four main clusters of benefits, although some benefits don’t fit cleanly into these categories.
Financial and resource-based benefits such as monetary aid, energy resources, and investments in infrastructure and education programs in other states.
Frontier AI benefits such as monitored access to advanced US AI systems.
National security benefits like US protection in the case of regional or global conflict, counter-terrorism support, or defense-dominant technologies like improved defenses against bioweapons.
‘Seats at the table’ for important decisions, such as how to develop or deploy next generation models, technical alignment efforts, or membership in working groups on how to navigate ethical challenges posed by advanced AI.[6]
Other benefits that don’t cleanly fit into the categories I propose, like guarantees to not engage in certain activities, diplomatic assurances regarding treatment of specific people, and an international AI policy advisory.
I’ve listed many possible benefits, even though many may prove unrealistic as we learn more about the trajectory of AI progress and which actors accrue the biggest benefits. I’ve also listed some benefits that are not directly derived from AI advances (e.g., US protection) but could be instrumental for international AI governance agreements.
Importantly, I focus on listing benefits that would most likely not cost the US its international lead on frontier AI development. For example, the lists below don’t include benefits like unmonitored access to the US’ top AI models or offensive military technology. I think distributing unmonitored access to the US’ most powerful AI models, for example, should be thought of as power-sharing, which is not the focus here. (See discussion of power-sharing vs. benefit-sharing.)
Financial and resource-based benefits
The most straightforward benefits that the US could give to other states are money and resources.
Monetary aid could be given to governments, or perhaps directly to citizens of other states, akin to a sort of international universal basic income (UBI) trial. Monetary amounts could be set in absolute terms or as a percentage of profits from some AI-driven index.
Non-monetary resources such as energy or natural resources. In futures where AI progress goes very fast and science-fiction-seeming futures stop seeming like fiction, some resources in space (e.g., solar output, helium-3 from the lunar surface for nuclear fusion) could also be on offer.
Physical infrastructure such as modernized electricity grid infrastructure, transportation, and public services.
Technological advancements like clean air technologies, environmental disaster prediction and response systems, healthcare technologies (like new cures or diagnostic tools), and renewable energy technology. (See later section for discussion of sharing frontier AI tech benefits and defense-dominant technologies).
Advantageous insurance packages, such as disaster insurance in a time of increasingly frequent and severe large-scale natural disasters.
Access to lucrative frontier AI-driven investment funds or financial markets.
Frontier AI benefits
If other states see the US economy and security being supercharged by advanced AI, one of the most direct and desired benefits the USG could share may be AI technology or access to it.
Some frontier AI technology and inputs will naturally diffuse internationally as US companies sell AI products abroad. But other frontier AI technology, like powerful general models that can be easily misused, may not be shared with everyday consumers, especially if the USG works closely with frontier AI labs.
Below, I discuss some levers that could allow the USG (or another actor that has control over the advanced AI models) to balance sharing altruistic and strategic benefits with maintaining a technological edge and preventing the proliferation of dangerous capabilities. (A brief, illustrative code sketch after this list shows how a few of these levers could compose.)
Whether it’s a general-purpose model or a narrow model: Instead of sharing highly capable general models, which are harder to evaluate, a US actor could share narrow AI models, tailored for specific beneficial applications like healthcare or supply chain optimization.
Which generation of model is shared: A US actor could offer models that are 1-3 generations behind their cutting-edge systems. Depending on how securitized or closed-off US frontier AI development becomes, such models could still be more capable than the most capable model consumers would otherwise have access to.
The extent to which model outputs are fine-tuned or restricted: A US actor could share powerful models, yet only after certain fine-tuning or reinforcement learning from human feedback (RLHF) so that the model doesn’t answer dangerous questions and follows certain high-level principles (e.g., non-violence). With monitored access (e.g., API access), the sharer could also make use of response filters and output classifier models that restrict what information is ultimately shown to the user.
The extent to which access is monitored: Access could be closely monitored in real-time, periodically audited, or granted with minimal oversight, depending on trust levels and security concerns.
The extent to which model access is revocable: By sharing API access rather than model weights, for example, a US actor could cut off model access any time.
The types of ‘compute’ access that are shared, if any: Instead of sharing AI model access or weights, the US could share the computational resources (a.k.a. ‘compute’) that allow for the training and use of AI models. This comes with risk: sharing scarce computational resources with other states, especially states that want to make more powerful models than the US, could undermine US AI leadership. However, some types of monitored compute access that only allow for the use of existing models (i.e., ‘inference’ rather than training) could be realistic for US actors to share. Alternatively, US actors could share computational resources with future hardware-enabled governance mechanisms that only allow for certain kinds of training runs, like training runs below a certain compute threshold or training runs that don’t involve certain types of data.
The types of AI development inputs that are shared. Computational resources, mentioned above, are an important input into AI development. But so are data, algorithms, and human capital. Plausibly some data sets, algorithmic breakthroughs, or expertise could be shared strategically.
The extent to which access to the model is deliberately slowed down: Access to any model could be ‘throttled’ by restricting run speeds or the number of copies that can be run in parallel.
The extent to which meta-information about the model is disclosed: Varying levels of information about the model’s architecture, training process, or relationship to other frontier models could be disclosed.
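To make these levers concrete, here is a minimal, purely illustrative Python sketch of a gated model-access service that combines a few of them: revocable access, throttling, monitored (logged) queries, an output filter, and a hardware-style compute threshold. All names, signatures, and numbers are hypothetical assumptions for illustration, not any real provider’s API.

```python
# Illustrative sketch only: a gated model-access service combining several
# of the levers above. All names and thresholds are hypothetical.
import time
from dataclasses import dataclass, field

@dataclass
class AccessGrant:
    state: str                       # identifier of the recipient state
    revoked: bool = False            # lever: access is revocable at any time
    max_queries_per_hour: int = 100  # lever: throttled query rate
    query_log: list = field(default_factory=list)  # lever: monitored access

class GatedModelService:
    def __init__(self, model, output_filter):
        self.model = model                  # e.g., a narrow or older-generation model
        self.output_filter = output_filter  # classifier that flags unsafe outputs

    def query(self, grant: AccessGrant, prompt: str) -> str:
        if grant.revoked:
            raise PermissionError("Access has been revoked by the provider.")
        now = time.time()
        recent = [t for t in grant.query_log if t > now - 3600]
        if len(recent) >= grant.max_queries_per_hour:
            raise RuntimeError("Hourly query quota exceeded (throttling).")
        grant.query_log.append(now)  # every query is logged for later audit
        response = self.model(prompt)
        # Response filter: withhold any output the classifier flags as unsafe.
        return response if self.output_filter(response) else "[response withheld]"

def training_run_allowed(requested_flops: float, flop_cap: float = 1e25) -> bool:
    """Hardware-enabled governance gate (hypothetical): permit only training
    runs below an agreed compute threshold."""
    return requested_flops < flop_cap
```

In practice these controls would live in secure hardware and audited serving infrastructure rather than application code, but the sketch shows the design point: each lever is an independent check that the provider can tighten or loosen per agreement.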
‘Seats at the table’ for decision-making related to AI development
Rather than—or in addition to—providing material benefits, the US could offer invitations to join important processes or proceedings as a form of benefit.
Note that insofar as these seats hold significant decision-making power, I think it makes more sense to think of sharing these seats as power-sharing than benefit-sharing.
Invitation to influence how frontier AI is evaluated: The USG could allow other states and actors to add to the evaluation suite upon which frontier US models will be tested.
Invitation to join international technical AI safety effort: The USG could invite top scientists from abroad to support AI alignment efforts.
Invitation to join applied ethics working groups or commissions: The development and deployment of advanced AI will be accompanied by many serious ethical questions, like how to handle the possibility of military automation and advanced persuasion. A US-led AI effort may be the first to encounter some of these challenges and the best-placed to implement solutions, and it could offer ‘seats at the table’ for other states to contribute to these decision-making processes.
Some sort of decision rights in key decisions about developing and deploying frontier AI (i.e., ‘control rights’): Even while maintaining primary control over decisions about how to use advanced AI, the USG could provide some sort of avenue for other states to contribute to deliberations on if, how, and when to use certain advanced systems.
Regular updates about US AI development: The USG could allow certain actors to enter into an exclusive AI information-sharing bubble.
Invitation to high-stakes conferences or conventions: It’s plausible that the US will want to convene some sort of gathering of international experts or officials to help it make key AI development or deployment decisions. Invitations to such gatherings could be a type of strategic benefit.
Empowering other states’ national security
While empowering other states’ national security could pose risks, doing so could be an important part of disincentivizing other states from racing the US to develop dangerous AI capabilities.
If allies and even adversaries of the USG can get the security benefits of advanced AI while leaving the development predominantly to the US, it may help avoid an international AI arms race in which the US-led AI project and a competitor feel pressured to cut corners on ensuring that the most advanced systems don’t pose catastrophic risks.
Additionally, boosting other states’ national security might help those states work towards goals they share with the US government. For example, sharing counter-terrorism technology could reduce the risk of bioterrorism that spills over into the US even when it doesn’t target the US.
Some examples of ‘national security benefits’ include:
US protection in the case of regional or global conflict. In extreme cases this could include extending nuclear umbrella protection to some states, similar to existing NATO arrangements.
Counter-terrorism support: Sharing intelligence from advanced counter-terrorism efforts, or the underlying technologies, strategies, and training programs for advanced counter-terrorism.
Defense-dominant technology such as space-based early warning systems, enhanced bioweapon or missile defense systems, cybersecurity assistance, and environmental disaster prediction and response systems.
Verifiable commitments to steer advanced AI development in non-adversarial ways: For example, the USG could commit to not creating AI systems that have certain offensive capabilities, or to create an AI system that would refuse certain commands that could threaten other states.
Other benefits
Assurances that the US won’t take certain actions: One counter-intuitive ‘benefit’ that the USG could offer is a guarantee that it won’t do certain things. For example, the USG could guarantee other states that it won’t engage in certain surveillance or trade practices.
Diplomatic assurances regarding treatment of specific people, like other states’ party leaders.
International AI Policy Advisory: The USG could establish an international advisory body that helps certain states develop their AI policies and strategies in ways that synergize with the USG’s AI plans.
Two key considerations for benefit-sharing
Credibility will be a key challenge for benefit-sharing and power-sharing agreements
Other governments need to believe that the USG’s benefit-sharing or power-sharing commitments are credible if they’re to offer something in return.[7]
In the near term, there are a number of international-relations credibility tools that could help the USG make its commitments to other states more credible. The need for, and efficacy of, these tools will depend on the properties of the benefits in question, like their revocability[8] and tangibility[9].
A track record of honoring commitments: Other states will look to the USG’s track record of fulfilling its promises when they assess the credibility of US benefit- and power-sharing.[10] To build credibility for future high-stakes negotiations involving benefit-sharing, it could be important in the coming years to make commitments about sharing AI-derived benefits, build the diplomatic capacity to do so, and follow through on these commitments.
International oversight and verification mechanisms: The USG could work with other states to establish an international institution with representatives from multiple states that oversees and verifies compliance with agreements. This would be analogous to the International Atomic Energy Agency (IAEA) inspections of nuclear facilities. Insofar as commitments concern whether one party trains certain types of advanced AI systems, or the details of those training runs, hardware-enabled governance will likely be crucial.
Peer evaluations: Instead of—or in addition to—third-party oversight and verification of benefit-sharing, the USG and others involved in international agreements could oversee each other’s commitments. Ideally this could be done in privacy-preserving ways.
Phased implementation with reversible steps: Agreements could be structured in phases, where each side takes small, reversible steps to build trust before larger commitments. Such phased implementation may be especially important in relationships with a lot of mutual distrust. For example, in an ambitious agreement where China pauses certain frontier AI progress, China could initially halt but not fully dismantle its AI program, giving it the option to restart if the US violates the agreement.
Legally binding international treaties: If commitments around international AI governance become very high-stakes, the USG could codify its commitments in formal treaties ratified by Congress.
Pre-specified consequences if an agreement is broken: Commitments could be made more credible by creating clear, mutually agreed-upon consequences for defecting from the agreement. Such commitment devices could look like strong sanctions or the loss of valuable resources that have been put into an escrow arrangement.[11] (A brief sketch of such an escrow arrangement follows this list.)
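Footnote 11 sketches a speculative escrow-like arrangement. Below is a minimal illustration of its settlement logic, with hypothetical parties and amounts: by default each party’s deposit is returned, while a party judged (by whatever process the agreement specifies) to have defected forfeits its deposit to the compliant parties.

```python
# Speculative sketch of the escrow-like arrangement from footnote 11.
# Parties and amounts are hypothetical; this is not a real mechanism.
from dataclasses import dataclass

@dataclass
class EscrowAgreement:
    deposits: dict  # party -> amount deposited with a neutral holder
    defected: set   # parties judged, by the agreed process, to have defected

    def settle(self) -> dict:
        """Return the final payout for each party."""
        payouts = {party: 0 for party in self.deposits}
        compliant = [p for p in self.deposits if p not in self.defected]
        for party, amount in self.deposits.items():
            if party in self.defected and compliant:
                # Defector's deposit is forfeited, split among compliant parties.
                share = amount / len(compliant)
                for p in compliant:
                    payouts[p] += share
            else:
                payouts[party] += amount  # default case: deposit returned
        return payouts

# Example: if the US defects, its deposit goes to the counterparty.
agreement = EscrowAgreement(deposits={"US": 100, "China": 100}, defected={"US"})
print(agreement.settle())  # {'US': 0, 'China': 200.0}
```

The design choice doing the work here is that the penalty is automatic once defection is established, so neither side needs to trust the other’s goodwill at settlement time, only the neutral holder and the adjudication process.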
However, there may come a period in late-stage AI international relations when traditional international credibility tools no longer work. This is because advanced AI systems may give a leading developer such a ‘decisive strategic advantage’ that it could defect from any existing commitment it has made and not pay any price. For example, extremely advanced AI systems could unlock new WMDs, enormous wealth and autonomy, and technology that makes the leading actor immune to nuclear strikes. If other states come to believe that the US could develop such a technological advantage and then defect from commitments at little to no cost, they may not agree to any deals with the USG, even if there are deals they would in principle want to agree on.
The challenge of making commitments that remain credible even if one party gets a decisive strategic advantage is out of the scope of this post, but it’s a formidable one.[12]
Benefit-sharing strategies should account for potential risks
While sharing AI-enabled benefits could yield strategic advantages for the US, it doesn’t come without risks.
The following risks could stem from sharing benefits like those I listed above.
Proliferation risks: Sharing certain technologies, especially those with dual-use potential, could lead to their misuse or accelerate dangerous AI development in other states, especially if they defect from an agreement with the US. In particular, sharing some technological benefits could increase the attack surface for cyber theft of AI model weights or other forms of espionage. The “Atoms for Peace” program offers a possible precedent: the nuclear energy benefits it shared may have contributed to nuclear proliferation.
Geopolitical instability: Selective benefit-sharing could upset states that feel they’re not receiving an appropriate share. Additionally, sharing certain technologies like improved missile defense systems could undermine long-standing deterrence equilibria.
Perverse incentives: For example, if states believe the US will offer significant benefits in exchange for slowing down domestic AI development, it could paradoxically incentivize them to accelerate their AI programs to gain bargaining power.
Signaling risks: The nature and scale of shared benefits—and the commitments the USG asks for in return—might signal that the USG sees AI as the key technology of the future, potentially escalating AI arms races.
In addition to these direct risks from benefit-sharing, there may also be risks that stem from the commitments other states agree to in exchange for the benefits. For example, if other states help secure the US’ lead on advanced AI development, this could risk framing AI development too much in terms of ‘who wins.’ Ultimately this could backfire if the US-led AI project races ahead to build new models faster than its safety solutions can keep up.
While I don’t yet have answers on how to mitigate all of these risks, they seem addressable with smart policy design and strong diplomacy. As AI-driven strategic benefit-sharing draws more consideration from AI governance researchers and implementers, these general risks provide a reason to be vigilant for now—not a reason to drop benefit-sharing proposals altogether.
. . .
I hope this post has begun to illuminate the role strategic benefit-sharing could play in international AI governance. If you’d like to discuss any of this more, you can reach me at michelm.justen [at] gmail [dot] com.
Thank you to Pivotal Research for running the AI governance fellowship that enabled me to conduct this research. Thank you as well to Matthew van der Merwe, Max Dalton, Oliver Guest, Bill Anderson-Samways, Claire Dennis, John Halstead, Oscar Delaney, and many others for valuable comments and feedback on this research. Feedback is not an endorsement; mistakes are my own.
Although I focus on the USG in this post, it’s still an open question whether the US government (USG) will be in a position to ambitiously share AI-derived benefits. The benefits of advanced AI may instead continue to primarily accrue to US-based AI companies, like OpenAI, Google, and Anthropic. This would leave the USG with benefits that look more like heightened tax revenue than key geopolitical playing pieces. However, I think it’s premature to rule out the possibility that the USG will form some sort of close partnership with these companies in the next decade. As a result, this post includes discussion of AI diplomacy and specific benefits that are only realistic if the USG and frontier AI labs are working together closely.
Note that some form of strategic benefit-sharing could also be utilized by joint international projects on advanced AI development, like a ‘CERN for AI’.
By advanced AI systems, I mean AI systems that can accomplish many cognitive tasks better and faster than most humans can.
See also: The Windfall Trust and Predistribution over Redistribution: Beyond the Windfall Clause.
Note that security assurance need not be AI-derived. Traditional benefits and AI-derived benefits could be bundled together.
Some examples of power-sharing include sharing frontier AI model weights, offensive military technology, or veto authority on key frontier AI development and deployment decisions.
Insofar as these seats hold significant decision making power or ‘control rights’, I think it makes more sense to think of sharing these seats as power-sharing than benefit-sharing.
The USG also needs to believe other states will uphold their side of the deal, but I focus on how to make the more empowered actor’s commitments credible.
Is it easy for the providing actor to revoke the benefit? A financial benefit may not be easy for the US to revoke, but API access to an AI model would be. Revocable benefits require more credibility.
Is it clear to a state when they’ve received the benefit? States could easily tell when they’ve received cash, but knowing whether or not they’re really receiving something less tangible, like a place under the US nuclear umbrella, requires more credibility.
For example, if the USG walked back from NATO security assurances, its future commitments would be less credible.
A speculative idea for escrow-like arrangements: parties put valuable resources into the hands of a neutral third party or a smart contract. By default they get those resources back (e.g., a large sum of money). But if an actor defects from an agreement, it loses access to that money, which potentially goes to the competitor. Escrow accounts have previously been used in international relations, but I haven’t looked into the details.
If you’re interested in working on this, let me know. I may work on this more.