Survey on intermediate goals in AI governance
It seems that a key bottleneck for the field of longtermism-aligned AI governance is limited strategic clarity (see Muehlhauser, 2020, 2021). As one effort to increase strategic clarity, in October-November 2022, we sent a survey to 229 people we had reason to believe were knowledgeable about longtermist AI governance, receiving 107 responses. We asked about:
respondents’ “theory of victory” for AI risk (which we defined as the main, high-level “plan” they’d propose for how humanity could plausibly manage the development and deployment of transformative AI such that we get long-lasting good outcomes),
how they’d feel about funding going to each of 53 potential “intermediate goals” for AI governance,[1]
what other intermediate goals they’d suggest,
how high they believe the risk of existential catastrophe from AI is, and
when they expect transformative AI (TAI) to be developed.
We hope the results will be useful to funders, policymakers, people at AI labs, researchers, field-builders, people orienting to longtermist AI governance, and perhaps other types of people. For example, the report could:
Broaden the range of options people can easily consider
Help people assess how much and in what way to focus on each potential “theory of victory”, “intermediate goal”, etc.
Target and improve further efforts to assess how much and in what way to focus on each potential theory of victory, intermediate goal, etc.
You can see a summary of the survey results here. Note that we expect readers to abide by the policy articulated in “About sharing information from this report” (for the reasons explained there).
Acknowledgments
This report is a project of Rethink Priorities, a think tank dedicated to informing decisions made by high-impact organizations and funders across various cause areas. The project was commissioned by Open Philanthropy. Full acknowledgements can be found in the linked “Introduction & summary” document.
If you are interested in RP’s work, please visit our research database and subscribe to our newsletter.
[1] Here’s the definition of “intermediate goal” that we stated in the survey itself:
By an intermediate goal, we mean any goal for reducing extreme AI risk that’s more specific and directly actionable than a high-level goal like ‘reduce existential AI accident risk’ but is less specific and directly actionable than a particular intervention. In another context (global health and development), examples of potential intermediate goals could include ‘develop better/cheaper malaria vaccines’ and ‘improve literacy rates in Sub-Saharan Africa’.
...and while I hopefully have your attention: My team is currently hiring for a Research Manager! If you might be interested in managing one or more researchers working on a diverse set of issues relevant to mitigating extreme risks from the development and deployment of AI, please check out the job ad!
The application form should take <2 hours. The deadline is the end of the day on March 21. The role is remote and we’re able to hire in most countries.
People with a wide range of backgrounds could turn out to be the best fit for the role. So if you’re interested, please don’t rule yourself out because you think you’re not qualified without at least reading the job ad first!
Can I ask whether there is a specific reason you didn’t put the summary of the findings in this post, but instead only let people request access to a Google Drive folder?
I just browsed through it; their reasons for not doing so are also described in a section of the report.
Yeah, the “About sharing information from this report” section attempts to explain this. Also, for what it’s worth, I approved all access requests, generally within 24 hours.
That said, FYI I’ve now switched the folder to being viewable by anyone with the link, rather than requiring people to request access, though we still have the policies in “About sharing information from this report”. (This switch was partly because my sense of the risks vs. benefits has changed, and partly because we apparently hit the maximum number of people who can be individually given access to a folder.)