The case for long-term corporate governance of AI

Summary

In this post, we define the term “long-term corporate governance of AI” and argue that:

  • Corporate governance is an important area of long-term AI governance.

  • Despite its importance, the corporate governance of AI is relatively neglected within long-term AI communities.

  • There are tractable things these communities could do to improve the long-term corporate governance of AI.

Definitions

By “long-term corporate governance of AI”, we mean the governance of corporate development and deployment of AI that could affect the long-term future. Broadly speaking, corporate governance refers to the ways in which corporations are managed, operated, regulated, and financed. Important elements of corporate governance include the legal status of corporations, the relationship between investors and executives, information flows within and outside of the corporation, and specific operational decisions made throughout the corporation. Corporate governance can include internal decision-making by corporate management, regulation by governments, and other activities that affect how corporations operate.

The long-term future involves outcomes over distant time periods, including millions, billions, or trillions of years into the future, especially with respect to the long-term trajectory of human civilization. In the context of AI, an orientation toward the long-term future often means an emphasis on advanced future forms of AI, such as artificial general intelligence (AGI). These forms of advanced future AI are sometimes referred to as long-term AI, though they may arise within decades or centuries, rather than in the “long-term future”. Long-term AI governance includes the governance of both long-term AI and any nearer-term forms of AI that could affect the long-term future. Likewise, long-term AI governance includes governance activities in both the near term and the long term that could affect the long-term future.

We also use the term “long-term AI communities” to refer to groups of people interested in the role AI plays in determining long-term future outcomes. This includes effective altruism (EA) communities focused on longtermism, professional communities working on long-term AI and its risks and opportunities, transhumanist communities, etc. This post is written primarily, but not exclusively, for an EA audience.

Corporate governance is an important area within long-term AI governance

Private industry is at the forefront of AI research and development (R&D). AI is a major focus of the technology industry, which includes some of the largest corporations in the world. Even the US Department of Defense, one of the world’s most technologically advanced government agencies, acknowledges that “almost all of the technology that is of importance in the future is coming from the commercial sector”—and that applies especially to AI technology.[1]

The corporate sector may be especially important for the development of advanced AI. Surveys by the Global Catastrophic Risk Institute (GCRI) published in 2017 and 2020 provide detailed mappings of the landscape of AGI R&D. The 2020 survey finds that 45 of 72 active AGI R&D projects are based in corporations, including 6 publicly traded and 39 private corporations. These include some of the largest AGI R&D projects, such as DeepMind (a subsidiary of Alphabet), OpenAI (which is part nonprofit and part for-profit, with the for-profit division partnered with Microsoft), Microsoft Research AI, and Vicarious. For comparison, the second-most prominent institution type for AGI R&D projects identified in the 2020 survey is academia, with 15 active academic projects. Six projects are based in nonprofits, and only three in governments. At least for now, the corporate sector is the dominant force in AGI R&D. How these companies govern their AI activities is therefore of utmost importance.

Corporate governance may be vital for reducing catastrophic risk and improving outcomes from advanced AI. In scenarios in which a single AGI or superintelligent AI system takes over the world, that system would likely be based in a corporation, at least if current trends continue. Good outcomes may depend on that corporation acting to ensure that its AI is designed according to high safety and ethical standards. In other scenarios, such as those considered in recent work by Andrew Critch, outcomes could be determined by the overall trajectory of the AI industry. In these scenarios, a key question may be whether the AI industry’s values are aligned with the public interest.

A recent episode at OpenAI illustrates the importance of corporate governance. As reported by the Financial Times, OpenAI’s partnership with Microsoft created tensions that may have contributed to some OpenAI employees leaving to found a new company, Anthropic, which is structured as a public benefit corporation to help it advance the common benefit of humanity.

There is a sense in which corporate governance for AI R&D can be seen as an alignment problem: the problem of aligning corporate behavior with the public interest or some other conception of moral value. Corporations often pursue their own financial benefit, even when it comes at the expense of the public interest. However, it need not be this way, as Lynn Stout’s work on the shareholder value myth helpfully shows. Indeed, there has been an encouraging recent rise of interest in alternative corporate governance paradigms, such as “stakeholder capitalism”, which shift emphasis toward the public good. Long-term AI communities can leverage these developments into valuable opportunities for engagement.

As AI technology advances, other actors, such as governments, may come to play more significant roles than they currently do. Nonetheless, corporations are likely to continue playing an important role in the development of AI: the technology has too much commercial value for them not to be significantly involved, especially given that they already are. These factors suggest an important role for corporate governance in the overall portfolio of work to improve long-term AI outcomes.

Despite its importance, the corporate governance of AI is relatively neglected within long-term AI communities

This claim is based on a review of relevant literature, EA Forum posts, and Open Philanthropy Project grantmaking.

Literature review

For a recent paper, we reviewed the literature on AI corporate governance. We didn’t find foundational research on major overarching issues in AI corporate governance, and we found only a small amount of research oriented toward long-term AI issues. Much of the literature that we did find covered a mix of topics related to corporate governance, especially liability law. There was also coverage in business and technology magazines and in publications by management consulting firms such as McKinsey. This is valuable work, but it does not address the major corporate governance issues that are most relevant to the long-term implications of AI.

Some work on AI corporate governance has been done by communities that work on long-term AI risks. This includes direct work by people employed at corporate groups such as DeepMind, OpenAI, and Anthropic. Although details of this work are often not publicly available, it is nonetheless important.

Some relevant research has been openly published, though it often addresses only a small subset of possible corporate governance concerns or tangentially addresses corporations’ role in broader AI governance. Amanda Askell et al.’s paper on cooperation between AI developers emphasizes corporate competition in AI development. Shahar Avin et al.’s paper on role playing future AI scenarios includes corporations among the scenario actors. Haydn Belfield’s paper on activism in AI includes robust attention to activism related to corporations. Allan Dafoe’s research agenda on AI governance includes some discussion of corporations, covering issues of incentives for safety, competition for AI advancement, and government regulation. Ben Goertzel has expressed concern that the corporatization of AI may lead to worse AGI outcomes.[2] (The Belfield and Askell et al. papers are not specifically about long-term AI, but their authors are from groups—CSER and Anthropic—that are oriented in part toward long-term AI.) Finally, Cullen O’Keefe and colleagues have developed the idea of a windfall clause that would govern corporate profits from advanced AI.

We have also contributed to work on AI corporate governance. Together with Peter Cihon and Moritz Kleinaltenkamp, we have a new paper on how certification regimes could be used to improve AI governance, especially corporate governance. Additionally, Seth Baum’s papers on superintelligence skepticism and misinformation explore potential scenarios in which the AI industry obfuscates AI risks, much as the fossil fuel industry has obfuscated climate risks.

We wish to highlight another new paper of ours, also written with Peter Cihon, titled Corporate governance of artificial intelligence in the public interest. The paper surveys opportunities to improve AI corporate governance. It covers opportunities for management, workers, investors, corporate partners and competitors, industry consortia, nonprofit organizations, the public, the media, and governments. We recommend this paper as a starting point for learning more about how AI corporate governance works and what opportunities there are to improve it. The paper could also inform analysis of how to prioritize opportunities in AI corporate governance, though prioritization is outside the scope of the paper itself.

This body of work shows that corporate governance has not been completely neglected by communities active on long-term AI issues, even though it may be relatively neglected considering its importance and compared to other lines of work.

Review of EA Forum posts

A search on the EA Forum for [“corporate governance” AI] yields only six results, three of which are on our work (this, this, and this), one of which discusses the windfall clause idea, one of which only mentions corporate governance to explain that it’s outside the scope of the post, and one of which is an interview in which the interviewee claims to not know much about corporate governance. For comparison, a search on the EA Forum for [“public policy” AI] yields 45 results, including multiple posts on AI public policy as a career direction (e.g., this, this, this, and this).[3] These keyword searches are not definitive. For example, the corporate governance search missed a post on Jade Leung’s EA Global talk on the importance of corporate actors leading on AI governance. Nonetheless, they are strongly suggestive of a relative neglect of AI corporate governance on the EA Forum.

Review of Open Philanthropy Project grantmaking

A review of Open Philanthropy Project grants on Potential Risks from Advanced Artificial Intelligence shows a variety of projects on public policy (and other topics including AI safety), but nothing focused on corporate governance.

Taken together, these reviews suggest that the corporate governance of AI is relatively neglected within long-term AI communities. It’s not our intent to argue against the importance of other lines of work on AI, such as public policy or safety techniques. These topics are important too. Indeed, we have done some work on other AI topics ourselves, including public policy. Furthermore, the topics are not mutually exclusive. In particular, public policy is one way to improve corporate governance. Nonetheless, corporate governance raises a distinctive set of issues, challenges, and opportunities. We believe these merit dedicated attention from long-term AI communities.

There are tractable things that long-term AI communities could do to improve the long-term corporate governance of AI

To be most useful, we believe that future work on AI corporate governance should integrate scholarship and practice. Scholarship is needed to learn from prior experience and develop new ideas. AI corporate governance is a relatively new topic, but the more general corporate governance literature provides a wealth of relevant insight. Additionally, practical experience is needed to ensure that ideas are viable and to effect real change. To the extent possible, all this work should be free from the biases of corporate financial self-interest. Some specific potential approaches include:

  • Building up corporate governance expertise among communities active on long-term AI issues. Some of this can and should come from people working in governance positions at AI companies, government positions focused on regulating the AI industry, media outlets reporting on the AI industry, and other practical roles. Some of it should also come from people hired by academic and nonprofit organizations to conduct research and project work on the topic. Funding should be made available to such organizations for these hires.

  • Generating ideas for improving the long-term corporate governance of AI. The publications listed above only scratch the surface of potential activities. Some work can adapt existing corporate governance concepts to the particulars of advanced AI. For example, our certification paper includes a brief discussion of certification for future AI that could be expanded to explore a certification regime for AGI and related technologies. Other work could examine the relative merits of novel corporate structures, compliance and risk management teams, employee activism, oversight boards, reputation management, shareholder activism, whistleblowing, and more.

  • Evaluating priorities for AI corporate governance work. Our survey paper maps out a range of opportunities, but it doesn’t evaluate their relative priority. Evaluations should proceed cautiously, accounting for uncertainty about the impacts of different opportunities and for important interconnections between them. This work would benefit from the availability of more detailed ideas on how to improve the long-term corporate governance of AI (see above).

Above all, we believe that corporate governance merits the same degree of attention and investment as public policy gets within the field of long-term AI governance.

Thanks to Peter Cihon, Conor Griffin, Allan Dafoe, Cullen O’Keefe, and Robert de Neufville for valuable feedback.

Disclaimer: Jonas Schuett has recently finished an internship at DeepMind. The views and opinions expressed here are his own.


  1. This quote is from former Deputy Secretary of Defense Bob Work, speaking in his official capacity. ↩︎

  2. Goertzel expressed this view in an article, “The Corporatization of AI is a Major Threat to Humanity”, published in 2017 by H+ Magazine at this link. At the time of this writing, that link is not operational. An archived version of the page is available here. ↩︎

  3. Some of these results are not relevant, due to quirks in the EA Forum search function (e.g., this, this, and this), but many are highly relevant. ↩︎