New DeepMind report on institutions for global AI governance


This is a linkpost for a new white paper from DeepMind, titled ‘Exploring institutions for global AI governance’, on “models and functions of international institutions that could help manage opportunities and mitigate risks of advanced AI”. I’ve copied the first few paragraphs.


Growing awareness of the global impact of advanced artificial intelligence (AI) has inspired public discussions about the need for international governance structures to help manage the opportunities and mitigate the risks involved.

Many of these discussions have drawn on analogies with the ICAO (International Civil Aviation Organisation) in civil aviation, CERN (European Organisation for Nuclear Research) in particle physics, and the IAEA (International Atomic Energy Agency) in nuclear technology, as well as intergovernmental and multi-stakeholder organisations in many other domains. And yet, while analogies can be a useful start, the technologies emerging from AI will be unlike aviation, particle physics, or nuclear technology.

To succeed with AI governance, we need to better understand:

  1. What specific benefits and risks we need to manage internationally.

  2. What governance functions those benefits and risks require.

  3. What organisations can best provide those functions.

Our latest paper, written with collaborators from the University of Oxford, Université de Montréal, University of Toronto, Columbia University, Harvard University, Stanford University, and OpenAI, addresses these questions and investigates how international institutions could help manage the global impact of frontier AI development and ensure AI’s benefits reach all communities.
