RP’s AI Governance & Strategy team—June 2023 interim overview

Hi! I co-lead Rethink Priorities’ AI Governance & Strategy (AIGS) team. At the suggestion of Ben West, I’m providing an update on our team.

Caveats:

  • This was quickly written and omits most of the reasoning behind our choices; that was all I had time to write, and it seemed better than not providing an update at all.

  • This post may not reflect the views of all members of the team, and doesn’t represent RP as an organization.

  • The areas we work in are evolving rapidly, so our strategy and projects are as well. We won’t update this post, but will update this overlapping document.

Comments and DMs are welcome, though I can’t guarantee a rapid or detailed reply.

Summary

  • The AIGS team works to reduce catastrophic risks related to AI by conducting research and strengthening the field of AI governance. We aim to bridge the technical and policy worlds, and we now focus on short, rapid-turnaround outputs and briefings. [read more]

  • Our four key workstreams are compute governance, China, lab governance, and US regulations. [read more]

  • We list some of our ongoing or completed projects. [read more]

  • Please feel free to reach out if you’d like to suggest a project; if you’re open to sharing feedback, expertise, or connections with us; or if you or someone you know might be interested in working with or funding us. [read more]

  • I summarize a few lessons learned and recent updates. [read more]

Who we are

Rethink Priorities’ AI Governance & Strategy team works to reduce catastrophic risks related to development & deployment of AI systems. We do this by producing research that grounds concrete recommendations in strategic considerations, and by strengthening coordination and talent pipelines across the AI governance field.

  • We combine the intellectual independence and nonpartisanship of a think tank with the flexibility and responsiveness of a consultancy. Our work is funded solely by foundations and independent donors, giving us the freedom to pursue important questions without bias. We’re always on the lookout for unexplored, high-value research questions; feel free to pitch us!

  • We aim to bridge the technical and policy worlds, with expertise on foundation models and the hardware underpinning them.

  • We focus on short, rapid-turnaround outputs and briefings, but also produce longer reports. Much of our work is nonpublic, but may be shareable on request.

  • We have 11 staff, listed here. You can contact any of us at firstname@rethinkpriorities.org.

Our four workstreams

We recently narrowed down to four focus areas, each of which has a 1-3 person subteam working on it. Below we summarize these workstreams and link to docs that provide further information on each (e.g., about ongoing projects, public outputs, and stakeholders and paths to impact).

  • Compute governance: This workstream will focus on establishing a firmer empirical and theoretical grounding for the fledgling field of compute governance, informing ongoing policy processes and debates, and developing more concrete technical and policy proposals. In particular, we will work to understand the impact of existing compute-related US export controls and to research what changes to them may be feasible and beneficial.

    • This workstream consists of Onni Aarne and Erich Grunewald, and we’re currently hiring a third member.

  • China: This workstream’s mission is to improve decisions at the intersection of AI governance and China. We are interested both in China-West relations concerning AI and in AI developments within China. We are particularly focused on informing decision-makers who are concerned about catastrophic risks from AI.

    • This workstream consists of Oliver Guest.

  • Lab governance: This workstream identifies concrete measures that frontier AI labs can adopt, now and in the future, to reduce the chance of catastrophic risks, and facilitates the adoption of these measures. By taking a tiered approach to each measure, we can push high-impact recommendations for immediate adoption while also raising awareness of higher-cost, higher-assurance versions that could be implemented if certain risk thresholds are met. We expect to publish new recommendations every 2-4 weeks from mid-August 2023.

    • This workstream consists of Zoe Williams, Shaun Ee, and Joe O’Brien.

  • US regulations: This workstream’s priority for the next three months is to act as consultants for the US AI policy outreach efforts of our contacts at think tanks, technical organizations, and government institutions who have the connections and mainstream credibility to influence policy directly. In the longer run, we aim to build our expertise and connections in the US policy space, engage directly with AI-relevant regulators, and inform long-term strategy on AI policy.

    • This workstream consists of Ashwin Acharya, Bill Anderson-Samways, and Patrick Levermore.

Most of these workstreams essentially only started in Q2 2023. Their strategies may change considerably, and we may drop, modify, or add workstreams in future.

We previously also worked on projects outside of those focus areas, some of which are still wrapping up. See here for elaboration.

Some of our ongoing or completed work

Note: This isn’t comprehensive, and in particular excludes nonpublic work. If you’re reading this after June 2023, please see the documents linked to in the above section for updated info on our completed or ongoing projects.

Compute governance

Ongoing:

  • Erich Grunewald (Research Assistant) is investigating the impact of the October 2022 export controls on Chinese actors’ access to state-of-the-art ML compute at scale. He’s focusing on China’s indigenous production capabilities and the likelihood of large numbers of controlled AI chips being smuggled into the country.

  • Onni Aarne (Associate Researcher) is investigating the potential of hardware-enabled mechanisms on AI-relevant chips for restricting dangerous uses of AI and allowing AI developers to make verifiable claims about their systems.

China

Ongoing:

  • Oliver Guest (Associate Researcher) is investigating questions at the intersection of US-China relations and AI governance, such as prospects and best approaches for track II diplomacy on AI risk.


Lab governance

Ongoing:

  • Patrick Levermore (Research Assistant) is assessing the potential value of an AI safety bounty program, which would reward members of the public who find safety issues in existing AI systems.

  • As noted above, Shaun Ee (Researcher), Joe O’Brien (Research Assistant), and Zoe Williams (Research Manager) are producing a series of concrete recommendations frontier AI labs can adopt to reduce the chance of catastrophic risks. In-progress reports include recommendations on Safety Culture, Isolation of Digital Systems, and Post-Deployment Monitoring and Safe Shutdown practices.

Strategy & foresight

Ongoing:

  • Bill Anderson-Samways (Associate Researcher) is investigating the likelihood of governments becoming involved in developing advanced AI and the forms that involvement could take.


US regulations

Ongoing:

  • Abi Olvera (contractor & affiliate; formerly Policy Fellow) is developing a database of AI policy proposals that could be implemented by the US government in the near or medium term. The database is intended to capture information on these proposals’ expected impacts, their feasibility, and how they could be implemented.

  • Ashwin, Bill, and Patrick are exploring a range of projects as part of our new workstream on US AI regulations.


How we can help you or you can help us

Feel free to:

  • propose research projects for us

  • suggest ways we could make our drafts/outputs more useful to you (e.g., by reframing them or addressing an additional subquestion)

  • ask us questions related to areas we work on

  • ask to see our drafts or nonpublic outputs

  • subscribe to RP’s newsletter

Please let us know (e.g., by emailing one of us at firstname@rethinkpriorities.org) if:

  • you have expertise in an area we work on, or just have suggestions for what to read or who to talk to

  • you have thoughts on our strategy, ongoing or planned projects, or drafts, or would be happy for us to ask you for feedback on those

  • you could help us get our work or recommendations to people who can act on them

  • you or someone you know might be interested in working with us or funding our work

    • In each of our focus areas, there are currently significant windows of opportunity, demand from decision-makers for our team’s work exceeds what our current staff can fulfill, and we have sufficient management and operations capacity to hire, given further funding.

    • We hope to start hiring for multiple workstreams in September/October 2023.

Appendix: Some lessons learned and recent updates

Here I feel especially inclined to remind readers that, due to time constraints, this post was unfortunately written quickly, omits most of our reasoning, and may not reflect the views of all members of the team.

A huge amount has happened in the AI and AI governance spaces since October 2022. Additionally, our team has learned a lot since starting out. Below I summarize some of the things that I personally consider lessons learned and recent updates for our team (with little elaboration, justification, or nuance).

Note that, even if these points are accurate for our team, they may not apply to other people or teams, depending on whether their beliefs, actions, skills, etc. are relevantly similar to what ours were until recently.

Regarding what we work on and consider important, it seems that, relative to 2022, our team should:

  • Think somewhat more about domestic legislation and regulation (not just national security policy)

    • Major domestic policy, and the prospect of such policy effectively reducing large-scale risks, now seem more likely or tractable than we had previously believed.

  • Focus somewhat more on governments and less on corporate labs than we had been

    • Government and public excitement and concern about AI have grown substantially, and more than we had expected for 2023.

  • Focus somewhat more on concrete and directly decision-relevant work and less on improving strategic clarity than we had been, at least for the coming months

    • Many windows of opportunity have opened.

    • The pace of relevant change has increased, so strategic research projects become somewhat outdated more quickly.

  • Focus somewhat more on advocating for particular ideas/proposals and less on “just” figuring out which ideas/proposals are best, relative to before (though still overall focus more on the latter than the former)

    • This entails, among other things, strengthening our “brand” and connections among actors who are relevant to AI governance but who aren’t already strongly in our network.

Regarding how we work, it seems that, relative to 2022, our team should:

  • Have just a few (broad) focus areas and structure our team as one workstream/subteam for each.

  • More strongly prioritize rapid turnaround of each output, e.g., by:

    • typically making our projects shorter and splitting large projects/outputs into multiple smaller ones.

      • Partly because the pace of relevant changes has increased dramatically, but also just because this seems useful anyway.

    • more strongly defaulting to each staff member doing only one major thing at a time

    • making various changes to project management, prioritization, team culture, etc. that I won’t describe here

Note that most of those are just small or moderate shifts (e.g., we still want several team members to focus on lab governance), and we may later shift in the opposite direction (e.g., we may increase our focus on strategic questions after we gain more expertise or if fewer policy windows are open in the future).

Acknowledgements

This is a blog post from Rethink Priorities, a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The primary author is Michael Aird, though some parts were contributed by Ashwin Acharya, Oliver Guest, Onni Aarne, Shaun Ee, and Zoe Williams. Thanks to them, to all other AIGS team members, and to Peter Wildeford, Sarina Wong, and other RP staff for helpful conversations and feedback.

If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.