Three new reports reviewing research and concepts in advanced AI governance


I’m sharing three new reports on AI governance (283 pages in total, including appendices), which review research lines, key concepts, and policy proposals in the field.

These are part of LPP’s ‘AI Foundations Reports’ (FR) series (see also our September report on International AI institutions: A literature review of models, examples, and proposals, as summarized in this Forum Post).

If you are time-constrained: pages 3–5 of each report contain an executive summary and overview tables.

The reports are:

  • (FR2) AI is like… A literature review of AI metaphors and why they matter for policy

    • Other links: (SSRN, PDF, online)

    • Summary: This report reviews why and how the framings, analogies, and metaphors that policymakers, media, and the public use when discussing AI matter for AI governance. It includes a review of five ways in which metaphors shape technology development and regulation, a survey of historical cases where the choice of analogy materially influenced regulation in areas such as cyberspace and (early) AI law, a survey of 55 analogies already used for AI (and their policy implications), and a discussion of the risks of bad analogies.

    • You might find this useful if: you’d like a primer for recognizing when an AI (policy) argument relies heavily on analogies, what effect that has, and what other framings could be chosen (and are being advanced).

  • (FR3) Concepts in advanced AI governance: A literature review of key terms and definitions

    • Other links: (SSRN, PDF, online)

    • Summary: This report provides an overview, taxonomy, and preliminary analysis of many cornerstone ideas and concepts within the fields focused on the risks from, and the governance of, advanced AI systems. It reviews three different purposes for pursuing definitions of AI (technological, sociotechnical, and regulatory). It reviews 101 definitions across 69 terms that have been coined for advanced AI systems, within four categories (forms of advanced AI, pathways to building it, aggregate societal impacts, and critical capabilities). For key terms it discusses common themes as well as the benefits and drawbacks of using those terms for policy and governance. It then reviews the ways the field has defined (AI) ‘policy’ and ‘governance’; how the field has defined itself (in terms of ‘advanced AI governance’, ‘transformative AI governance’, ‘longtermist AI governance’, etc.); and terms used in discussing theories of change. Appendices provide detailed lists of definitions and sources for all terms.

    • You might find this useful if: you’d like greater clarity about the concepts we use, the different ways they have been used in the past, and their strengths and drawbacks in AI law & policy debates (see also this previous Forum Post by Oliver G).

  • (FR4) Advanced AI governance: A literature review of problems, options, and proposals

    • Other links: (SSRN, PDF, online)

    • Summary: This literature review aims to provide an updated overview and taxonomy of research on advanced AI governance—the field focused on studying the potential problems and risks of advanced AI; the options available for its governance; and concrete policy proposals to implement. After briefly setting out the aims and scope of the review, it surveys three major lines of work:

      • (I) problem-clarifying research aimed at understanding the challenges advanced AI poses for governance, by mapping the strategic parameters (technical, deployment, governance) around its development, and by deriving indirect guidance from history, models, or theory;

      • (II) option-identifying work aimed at understanding affordances for governing these problems, by mapping potential key actors, their levers of governance over AI, and pathways to influence whether or how these are utilized;

      • (III) prescriptive work aimed at identifying priorities and articulating concrete proposals for advanced AI policy, on the basis of certain views of the problem and governance options.

    • You might find this useful if: you’d like greater clarity about the different lines of research pursued in recent years, how they fit together, and what gaps (and fertile directions) still exist; or about how to connect world models of the problem to be solved (e.g. of AI risk) with models of the tools available, and to distill this into specific, concrete policy programs. It may also be of use in compiling syllabi or courses.

Thanks to the many, many people who have given valuable feedback and input on these!
