Capping AGI profits
Introduction
Beyond the many concerns around AI alignment, the development of artificial general intelligence (AGI) also raises concerns about the concentration of wealth and power in the hands of a few corporations. While I’m very glad to see people working on avoiding worst-case scenarios, my impression is that relatively less attention is being given to “grey area” scenarios, in which catastrophe is neither near-certain nor automatically avoided. These scenarios strike me as worlds in which policy and governance work may be relatively important.
In this post I outline a case for government-imposed caps on extreme profits generated through AGI. I think this could be promising both as a way to distribute AGI-generated wealth democratically, and (hopefully) as a way to disincentivize AGI development by profit-motivated actors.
Edit (March 21): Thank you to Larks for pointing out the strong similarities between this proposal and a Windfall Clause. As far as I can tell, my proposal mainly differs from a Windfall Clause in that it is feasible to implement without the coordinated buy-in of AI labs, and in that it fits more squarely within existing policy paradigms. As potential drawbacks, it seems more prone to creating tensions at an international level, and less targeted at effective redistribution of funds, although I think there could be practical solutions to these issues.
Disclaimers:
- As best I can tell, the idea of a “capped-profit” organization was introduced by OpenAI in 2019, but I have not seen any discussion of it in the context of broader policy options. I do not claim to have any especially novel ideas here, but I apologize if I’ve missed someone else’s work on this.
- Since I am sympathetic to the claim that equitable distribution of wealth is a second-order problem in the face of potential AGI ruin, I am conditioning the remainder of the post on “assuming we find practical ways to make AI both safe and useful.”
AGI wealth
Many believe that AGI has the potential to generate an enormous amount of wealth, provided that we avoid disastrous outcomes. For example, Metaculus forecasters assign a 60% likelihood that GDP will grow by 30% or more in at least one of the 15 years after human-level AI is achieved. On Manifold, predictors give an 80% chance that AI will constitute more than 10% of GDP by 2050 (although that market is thin, and the criteria for resolution are unclear).
Consistent with this notion, OpenAI restructured itself as a capped-profit organization in 2019. The move was intended to allow OpenAI to fund itself as a profit-driven company in the short term, while maintaining its non-profit mission if it succeeds at creating AGI. If the organization becomes immensely profitable due to its development of powerful AI, investors’ returns will be limited to a fixed multiple of their contribution (100 times, for initial investors), and any excess profits will be redirected to a supervising nonprofit organization, whose “primary fiduciary duty is to humanity.”
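To make the arithmetic concrete, here is a minimal sketch of how such a cap divides a payout between an investor and the supervising nonprofit. The single-stake structure, function name, and dollar figures are illustrative assumptions on my part, not OpenAI’s actual deal terms:

```python
def split_payout(investment, cap_multiple, gross_return):
    """Split a hypothetical payout between an investor and the nonprofit.

    investment: capital the investor contributed
    cap_multiple: maximum return multiple (e.g. 100 for early OpenAI investors)
    gross_return: total amount attributable to the investor's stake
    """
    to_investor = min(gross_return, investment * cap_multiple)
    to_nonprofit = gross_return - to_investor
    return to_investor, to_nonprofit

# A $10M stake with a 100x cap: everything beyond $1B flows to the nonprofit.
investor_share, nonprofit_share = split_payout(10e6, 100, 5e9)
print(investor_share, nonprofit_share)  # 1000000000.0 4000000000.0
```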
Although this seems admirable in purpose, it raises several questions. How will the non-profit use its massive income? Will the OpenAI board act as benevolent autocrats, funding social programs of their choice, or will there be an attempt to create democratic channels of decision-making? Can we trust the entity to adhere to its charter faithfully? Above all, what will happen in the future if some other profit-driven company is the first to create AGI?
Capping profits more broadly
OpenAI’s capped-profit model suggests a policy option for governments that view the above questions as concerning. Rather than hoping that AI companies will charitably distribute massive profits, governments could impose a fixed limit on company profitability. If companies become incredibly wealthy as a result of AGI, the ensuing tax revenues could be used to finance social programs such as universal basic income (UBI), through existing democratic pathways.
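In tax terms, the simplest version is a confiscatory marginal rate above some very high profit threshold. The sketch below shows that mechanic; the threshold, rate, and function name are hypothetical choices of my own, not a concrete proposal:

```python
def profit_cap_tax(profit, threshold=10e9, marginal_rate=1.0):
    """Tax owed under a hypothetical hard profit cap.

    Profits up to `threshold` are untouched (ordinary corporate tax would
    still apply); profits above it are taxed at `marginal_rate`, which is
    1.0 (i.e. 100%) for a hard cap.
    """
    excess = max(0.0, profit - threshold)
    return excess * marginal_rate

# With a $10B threshold, a business-as-usual firm earning $2B owes nothing
# extra, while a $500B AGI windfall surrenders the $490B excess.
print(profit_cap_tax(2e9))    # 0.0
print(profit_cap_tax(500e9))  # 490000000000.0
```

One design note: setting the threshold well above any existing company’s annual profit is what makes the second property below plausible, since no business-as-usual firm would ever expect to hit it.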
I think such a policy has a number of attractive properties:
- In addition to providing stronger democratic assurances, this could help disincentivize reckless AGI development by companies pursuing tail outcomes.
- The profitability limit could be tailored such that no business-as-usual company expects to be affected, minimizing its distortionary effects outside of companies pursuing transformative technologies.
- The policy fits cleanly into established tax policy frameworks, reducing the friction of implementation.
- Since nearly all voters and political actors stand to benefit, it should be relatively easy to build support for this kind of policy.
Naturally, there are numerous questions and uncertainties to be addressed:
- It would be critical to ensure the policy is highly robust to a variety of takeoff scenarios, that it balances redistribution against a “winning” company’s ongoing capital requirements, and that it properly captures profits in high-growth outcomes while avoiding taxing unintended targets. I do not yet have a strong view on what the specifics should entail, though a graduated schedule along the lines of the sketch after this list is one possibility.
- Companies pursuing AGI might be able to use their technology to effectively circumvent the policy. If they anticipate this possibility ahead of time, it will weaken the policy’s disincentive effects.
- It is unclear to what extent leading AI labs would embrace or oppose this kind of policy. On the one hand, it could ease certain race dynamics and generate favorable PR; on the other, many are aiming to win the race.
- The usual issues of tax avoidance apply: it would be easy for companies to relocate to countries with lower taxes on corporate profits.
- Finally, it is not obvious that governments would be better stewards of the money than a profit-capped organization like OpenAI. Governments are accountable primarily to their own citizens, so if AGI is created in a country with profit caps, the resulting revenues may be distributed less equitably at the global level than a charter-bound nonprofit would manage. Finding ways to mitigate this should be a high priority if this policy is seriously considered.
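As flagged in the first bullet above, a hard 100% cap could starve the “winning” company of capital it needs to keep operating. A graduated schedule is one way to balance redistribution against that; the brackets and rates below are made-up illustrations, not a concrete proposal:

```python
# Hypothetical graduated schedule: ordinary profits untouched, a middle
# band taxed partially so a "winning" firm retains capital to operate,
# and extreme windfalls captured almost entirely.
BRACKETS = [
    (10e9, 0.0),           # below $10B: no additional tax
    (100e9, 0.5),          # $10B-$100B: half of the excess captured
    (float("inf"), 0.95),  # beyond $100B: nearly all of it captured
]

def graduated_cap_tax(profit):
    """Tax owed under the hypothetical schedule above."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if profit > lower:
            tax += (min(profit, upper) - lower) * rate
        lower = upper
    return tax

print(graduated_cap_tax(500e9))  # 425000000000.0 ($45B + $380B)
```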
Despite the unresolved questions, this appears to be a promising direction to me. Even if companies could easily evade regulations using AGI, it seems plausible that such a policy could create a Schelling point for cooperation and contribution to the social good. On the last bullet point, I suspect that there are solutions to the problem of distributing tax revenues globally, which at least outperform corporate disbursements in expectation.
I also think that starting with a legible and broadly popular policy would be a very good way to initiate public discussion of AI governance. While there is likely a lot of behind-the-scenes work that I am unaware of, my impression is that existing momentum in public AI policy is not heading in the most useful direction. It strikes me that taking a positive first step, especially one which recognizes surprising claims like “AGI companies may grow 100 or 1000x,” would help shift the Overton window on policy in the right direction.
Comments
Larks: There has been some prior work you might enjoy reading, labeled under ‘Windfall Clause’. See the collection of posts here, the original proposal here, and my criticism here.
Author’s reply: Thanks! I’d heard of the windfall clause idea, but somehow not made the connection to this. I’ll edit the post to make note of it.