What is everyone doing in AI governance
What is this post about, and how to use it
This is a list of AI governance organizations with brief descriptions of their areas of work, so a reader can get a basic understanding of what they are doing and explore them in depth on their own.
There is another recent post on a similar topic that explores AI governance research agendas, with less focus on the areas of work of specific governance organizations.
This is a flawed list
I made this list using publicly available data, as well as by talking with people in the field. I am sure that I missed some important things, especially in the section on the governance teams of AI labs, since they are less open than nonprofits, but the best way to get the right answer on the internet is to post the wrong one.
At some point, adding each new bit of information started taking more and more time, so I decided to stop at a point where the post is reasonably useful without becoming a struggle to finish.
DM me if you see mistakes or think I forgot something important.
Major non-profit organizations
Centre for the Governance of AI (GovAI)
Areas of work
Scientific research on AI policies
Educational and fellowship programs
Some efforts to improve coordination among AI governance orgs
Mechanisms of influence:
Many prominent AI governance specialists are alumni or former employees of GovAI, including heads of policy at DeepMind, OpenAI, and Anthropic, so GovAI has strong connections among decision-makers.
GovAI has produced a lot of research on policies, mostly academic and, to a lesser extent, applied.
Other notable things
GovAI has an established reputation as a respectable organization
Center for AI Safety (CAIS)
Areas of work
Field building
Courses and fellowships for starting a career in technical AI safety and philosophy
Offer compute resources to AI safety researchers, including a compute cluster
Organize competitions for AI safety researchers
Research
Technical research on AI safety
Research on future societal risks from AI
Mechanisms of influence
CAIS nurtures technical researchers and philosophers
Existential Risk Observatory (ERO)
Areas of work
Raise awareness of AI x-risks in the press (dozens of publications, including TIME magazine) and at conferences
Consultations with Dutch MPs and lobbying of the Dutch government
Mechanisms of influence
Lobbying and publicity in media
Other notable things
They do not just focus on the risks from AI, but on other potentially existential risks as well: e.g. pandemics and nuclear war
Centre for Long-term Resilience (CLTR)
Areas of work
Developing policy proposals on extreme risks
Lobbying for policies on extreme risks in a number of jurisdictions, with a focus on the UK. They describe themselves as the glue between research on extreme risks and actionable policies.
Mechanisms of influence
CLTR has solid connections among UK political decision-makers, who know and trust them.
Other notable things
CLTR does not focus solely on risks from AI, but also on biological risks such as pandemics caused by bioweapons.
Right now there is a window of opportunity for shaping AI policy, so CLTR’s influence is at its peak.
Future of Life Institute (FLI)
Areas of work
Policy advocacy in the US, the EU, and the UK (the EU AI Act, the NIST framework in the US, policies against integrating AI into nuclear launch systems)
Education (podcast, live events, videos)
Grantmaking for fellowships and research
Mechanisms of influence
From an outsider’s perspective, the only projects I was able to identify as ongoing are grantmaking and public outreach.
Other notable things
They do not just focus on the risks from AI, but on other potentially existential risks as well: biotechnology, nukes, climate change
The Future Society (TFS)
Areas of work
Applied research for AI policies in the EU (including the EU AI Act)
Organizer of The Athens Roundtable on AI and the Rule of Law, the largest AI governance conference
Coordinating policy-makers, AI developers, standard-setting bodies, and AI safety community members to enable “distributed competence” in policy-making and effective mechanisms of policy enforcement
Mechanisms of influence
Solid network among all sorts of important actors in AI regulation
Several employees are experts in international regulatory bodies
A platform for coordination among all important actors in AI regulation
Other notable things
On paper, TFS is quite influential, as they are like a large squid with tentacles in almost every major AI safety actor, but the real extent of their influence is unclear to me.
Rethink Priorities (RP)
Areas of work
Research on China’s AI safety strategy and China-Western relations
Consultations with US policy-makers on AI regulation
Launching an incubator for longtermist entrepreneurs
Mechanisms of influence
Unclear
Other notable things
Focus not just on AI safety, but also on animal welfare, climate change, and global health
Relatively small influence on US policy-making
Planning to do research on safety policies for AI labs, as well as on compute governance (the governance of computational resources, e.g. chips)
Publish a newsletter
UK Foundation Model Task Force
No website yet
At the time this post is published, there is little information on this organization, but it has solid support from the UK government as well as from all major AI labs, and it has the ambition to become the main international body for AI regulation.
Less-influential non-profit organizations
The organizations I consider less influential are described briefly. There are no objective criteria for which organizations are major and which are not; this classification is based only on my impression.
Center for Human Compatible AI (CHAI)
Mostly conceptual research on ML, AI safety, and cognitive science, but Stuart Russell, the founder of CHAI, is a high-profile public figure who is active in the media.
Campaign for AI Safety
Mostly research on public opinion of AI risks, which is useful for shaping narratives around the topic.
PauseAI
Protests to pause AI development, as well as some outreach to media and politicians.
Future of Humanity Institute (FHI)
Conceptual research on AI governance and technical AI safety, as well as strategic thinking in general. The last activity on their website is dated 2021.
Global Catastrophic Risk Institute (GCRI)
Minor policy advising, as well as public outreach on AI safety. They focus mostly not on AI risks but on other global risks.
Center on Long-Term Risk (CLR)
It was hard for me to determine whether CLR is actively doing any work on AI governance, but it seems like they have some influence on AI labs and are doing academic research on cooperation.
BlueDot Impact
Runs the AI Safety Fundamentals courses
Governance teams at AI labs
I expect this part of the list to be much less complete than the previous ones, since AI labs are generally much less open to the public about their work. If you know more non-confidential information, feel free to DM me.
The public policy team at OpenAI
Areas of work
Coordination among leading AI labs to slow down AI progress and ensure safety
Advocating for the creation of an international regulatory body for AI that controls the development and deployment of AI models, something similar to the IAEA
Development of instruments for democratic public oversight of the governance of AGI
Research on the effects of AI systems on society
Sources: [1] [2] [3]
The policy team at Anthropic
Areas of work
Support the development of safety evaluations for AI systems
Advocate for regulators to require pre-registration of large training runs
Empower third-party audits of models, as well as red-teaming
Sources: [1] [2]
The policy team at Google DeepMind
Although I found some sources describing DeepMind’s AI policy work ([1] [2] [3]), I was unable to form a comprehensive picture of what they are doing, so I decided not to describe it here to avoid misleading readers.
The governance team at Conjecture
One of the team’s goals is advocating for narratives on AI x-risks among the general public and politicians. Members of the team also launched the Stop AGI project, which aims to achieve similar goals.