What is everyone doing in AI governance?

What is this post about, and how to use it

This is a list of AI governance organizations with a brief description of their directions of work, so a reader can get a basic understanding of what they are doing and explore them in-depth on their own.

There is another recent post on a similar topic that explores AI governance research agendas, with less focus on the areas of work of specific governance organizations.

This is a flawed list

I made this list using publicly available data, as well as by talking with people from the field. I am sure that I missed some important things, especially in the section on the governance teams of AI labs, since labs are less open than nonprofits. But the best way to get the right answer on the internet is to post the wrong one.

At some point, each new bit of information started taking more and more time to add, so I decided to stop at the point where the post is reasonably useful without becoming a struggle to finish.


DM me if you see mistakes or think I forgot something important.

Major non-profit organizations

Centre for the Governance of AI (GovAI)

Areas of work

  1. Scientific research on AI policies

  2. Educational and fellowship programs

  3. Some efforts to improve coordination among AI governance orgs

Mechanisms of influence

  1. Many prominent AI governance specialists are alumni or former employees of GovAI, including the heads of policy at DeepMind, OpenAI, and Anthropic, so GovAI has strong connections among decision-makers

  2. GovAI has produced a lot of research on AI policy, mostly academic and, to a lesser extent, applied

Other notable things

GovAI has an established brand as a respectable organization

Center for AI Safety (CAIS)

Areas of work

  1. Field building

    1. Courses and fellowships for starting a career in technical AI safety and philosophy

    2. Offering compute resources to AI safety researchers, including a compute cluster

    3. Organizing competitions for AI safety researchers

  2. Research

    1. Technical research on AI safety

    2. Research on future societal risks from AI

Mechanisms of influence

  1. CAIS nurtures technical researchers and philosophers

Existential Risk Observatory (ERO)

Areas of work

  1. Raising awareness of AI x-risks in the press (dozens of publications, including TIME magazine) and at conferences

  2. Consultations with Dutch MPs and lobbying in the Dutch government

Mechanisms of influence

Lobbying and publicity in the media

Other notable things

They focus not just on risks from AI but on other potential existential risks as well, e.g. pandemics and nuclear war

Centre for Long-term Resilience (CLTR)

Areas of work

  1. Developing policy proposals on extreme risks

  2. Lobbying for policies on extreme risks in a number of jurisdictions, with a focus on the UK. They describe themselves as the glue between research on extreme risks and actionable policies

Mechanisms of influence

CLTR has solid connections among UK political decision-makers, who know and trust them.

Other notable things

  1. CLTR focuses not just on risks from AI but also on biological risks such as pandemics and bioweapons

  2. Right now there is a window of opportunity for shaping AI policies, so CLTR’s influence is at its peak

Future of Life Institute (FLI)

Areas of work

  1. Policy advocacy in the US, the EU, and the UK (the EU AI Act, the NIST framework in the US, policies against integrating AI into nuclear launch systems)

  2. Education (a podcast, live events, videos)

  3. Grantmaking for fellowships and research

Mechanisms of influence

As an outsider, the only projects I was able to identify as clearly ongoing are grantmaking and public outreach

Other notable things

They focus not just on risks from AI but on other potential existential risks as well: biotechnology, nuclear weapons, and climate change

The Future Society (TFS)

Areas of work

  1. Applied research on AI policies in the EU (including the EU AI Act)

  2. Organizing The Athens Roundtable on AI and the Rule of Law, the largest AI governance conference

  3. Enabling coordination among policy-makers, AI developers, standard-setting bodies, and AI safety community members to enable “distributed competence” for policy-making and effective mechanisms of policy enforcement

Mechanisms of influence

  1. Solid network among all sorts of important actors in AI regulation

  2. Several employees are experts in international regulatory bodies

  3. A platform for coordination among all important actors in AI regulation

Other notable things

On paper, TFS is quite influential: they are like a large squid with tentacles in almost every major AI safety actor. But the real extent of their influence is unclear to me.

Rethink Priorities (RP)

Areas of work

  1. Research on Chinese AI safety strategy and Chinese–Western relations

  2. Consultations with US policy-makers on AI regulations

  3. Launching an incubator for longtermist entrepreneurs

Mechanisms of influence

Unclear

Other notable things

  1. Focuses not just on AI safety but also on animal welfare, climate change, and global health

  2. Relatively small influence on US policy-making

  3. Planning to do research on safety policies for AI labs, as well as compute governance (the governance of computational resources, e.g. chips)

  4. Publishes a newsletter

UK Foundation Model Task Force

No website yet


As of this post’s publication, there is little public information about this organization, but it has solid support from the UK government as well as from all major AI labs, and it has the ambition to become the main international body for AI regulation.

Less-influential non-profit organizations

The organizations I consider less influential are described briefly. There are no objective criteria for which organizations count as major; this classification is based only on my impression.

Center for Human Compatible AI (CHAI)

Mostly conceptual research on ML, AI safety, and cognitive science, but Stuart Russell, the founder of CHAI, is a high-profile public figure who is active in the media.

Campaign for AI Safety

Mostly research on public opinion about AI risks, which is useful for shaping narratives around these topics.

PauseAI

Protests to pause AI development, as well as some outreach to media and politicians.

Future of Humanity Institute (FHI)

Conceptual research on AI governance and technical AI safety, as well as strategic thinking in general. The last activity on their website is dated 2021.

Global Catastrophic Risk Institute (GCRI)

Minor policy advising, as well as public outreach on AI safety. Focuses mostly on global risks other than AI.

Center on Long-Term Risk (CLR)

It was hard for me to determine whether CLR is actively doing any work on AI governance, but they seem to have some influence on AI labs and are doing academic research on cooperation.

BlueDot Impact

Runs AI safety fundamentals courses.

Governance teams at AI labs


I expect this part of the list to be much less complete than the previous ones, since AI labs are generally much less open to the public about their work. If you know more non-confidential information than I do, feel free to DM me.

The public policy team at OpenAI

Areas of work

  1. Coordination among leading AI labs to slow down AI progress and ensure safety

  2. Advocating for the creation of an international regulatory body for AI that controls the development and deployment of AI models. Something similar to IAEA

  3. Development of instruments for democratic public oversight of AGI governance

  4. Research on the effects of AI systems on society

Sources: [1] [2] [3]

The policy team at Anthropic

Areas of work

  1. Support the development of safety evaluations for AI systems

  2. Advocating that regulators require pre-registration of large training runs

  3. Empowering third-party audits of models, as well as red-teaming

Sources: [1] [2]

The policy team at Google DeepMind

Although I found some sources describing DeepMind’s AI policy work ([1] [2] [3]), I was unable to form a comprehensive picture of what they are doing, so I decided not to describe it here to avoid misleading readers.

The governance team at Conjecture


One of the team’s goals is advocating for narratives on AI x-risks among the general public and politicians. Members of the team also launched the Stop AGI project, which aims to achieve similar goals.