Big Picture AI Safety: Introduction

tldr: I conducted 17 semi-structured interviews with AI safety experts about their big-picture strategic view of the AI safety landscape: how human-level AI will play out, how things might go wrong, and what the AI safety community should be doing. While many respondents held “traditional” views (e.g. the main threat is misaligned AI takeover), there was more opposition to these standard views than I expected, and the field seems more split on many important questions than someone outside the field might infer.

What do AI safety experts believe about the big picture of AI risk? How might things go wrong, what should we do about it, and how have we done so far? Does everybody in AI safety agree on the fundamentals? Which views are consensus, which are contested, and which are fringe? Maybe we could learn this from the literature (as in the MTAIR project), but many ideas and opinions are not written down anywhere; they exist only in people’s heads and in lunchtime conversations at AI labs and coworking spaces.

I set out to learn what the AI safety community believes about the strategic landscape of AI safety. I conducted 17 semi-structured interviews with a range of AI safety experts. I avoided going into any details of particular technical concepts or philosophical arguments, instead focussing on how such concepts and arguments fit into the big picture of what AI safety is trying to achieve.

This work is similar to the AI Impacts surveys, Vael Gates’ AI Risk Discussions, and Rob Bensinger’s existential risk from AI survey. It differs from those projects in that both my interview approach and my analysis are more qualitative. Part of the hope for this project was that it could capture harder-to-quantify concepts that are too ill-defined or intuition-based to fit the format of previous survey work.

Questions

I asked the participants a standardized list of questions.

  • What will happen?

    • Q1 Will there be a human-level AI? What is your modal guess of what the first human-level AI (HLAI) will look like? I define HLAI as an AI system that can carry out roughly 100% of economically valuable cognitive tasks more cheaply than a human.

      • Q1a What’s your 60% or 90% confidence interval for the date of the first HLAI?

    • Q2 Could AI bring about an existential catastrophe? If so, what is the most likely way this could happen?

      • Q2a What’s your best guess at the probability of such a catastrophe?

  • What should we do?

    • Q3 Imagine a world where, absent any effort from the AI safety community, an existential catastrophe happens, but actions taken by the AI safety community prevent such a catastrophe. In this world, what did we do to prevent the catastrophe?

    • Q4 What research direction (or other activity) do you think will reduce existential risk the most, and what is its theory of change? Could this backfire in some way?

  • What mistakes have been made?

    • Q5 Are there any big mistakes the AI safety community has made in the past or are currently making?

These questions changed gradually as the interviews went on (in response to feedback from participants), and I didn’t always ask the questions exactly as I’ve presented them here. I asked participants to answer from their own internal model of the world as much as possible and to avoid deferring to the opinions of others (their inside view, so to speak).

Participants

  • Adam Gleave is the CEO and co-founder of the alignment research non-profit FAR AI. (Sept 23)

  • Adrià Garriga-Alonso is a research scientist at FAR AI. (Oct 23)

  • Ajeya Cotra leads Open Philanthropy’s grantmaking on technical research that could help to clarify and reduce catastrophic risks from advanced AI. (Jan 24)

  • Alex Turner is a research scientist at Google DeepMind on the Scalable Alignment team. (Feb 24)

  • Ben Cottier is a researcher at Epoch AI specializing in key trends and questions that will shape the trajectory and governance of AI. (Oct 23)

  • Daniel Filan is a PhD candidate at the Center for Human-Compatible AI under Stuart Russell and runs the AXRP podcast. (Feb 24)

  • David Krueger is an assistant professor in Machine Learning and Computer Vision at the University of Cambridge. (Feb 24)

  • Evan Hubinger is an AI alignment stress-testing researcher at Anthropic. (Feb 24)

  • Gillian Hadfield is a Professor of Law & Strategic Management at the University of Toronto and holds a CIFAR AI Chair at the Vector Institute for Artificial Intelligence. (Feb 24)

  • Holly Elmore is currently running the US branch of the Pause AI movement and previously worked at Rethink Priorities. (Jan 24)

  • Jamie Bernardi co-founded BlueDot Impact and ran the AI Safety Fundamentals community, courses and website. (Oct 23)

  • Neel Nanda runs Google DeepMind’s mechanistic interpretability team. (Feb 24)

  • Nora Belrose is the head of interpretability research at EleutherAI. (Feb 24)

  • Noah Siegel is a senior research engineer at Google DeepMind and a PhD candidate at University College London. (Jan 24)

  • Ole Jorgensen is a member of technical staff at the UK Government’s AI Safety Institute (this interview was conducted before he joined). (Mar 23)

  • Richard Ngo is an AI governance researcher at OpenAI. (Feb 24)

  • Ryan Greenblatt is a researcher at the AI safety non-profit Redwood Research. (Feb 24)

The interviews were conducted between March 2023 and February 2024 and represent the participants’ views at the time.

A very brief summary of what people said

What will happen?

Many respondents expected the first human-level AI (HLAI) to be in the same paradigm as current large language models (LLMs) like GPT-4: probably scaled up (made bigger), with some new tweaks and hacks, and with scaffolding like AutoGPT to make it agentic. A smaller number of people predicted that bigger breakthroughs will be required before HLAI. The most common story of how AI could cause an existential catastrophe was misaligned AI takeover, but some explicitly pushed back on the assumptions behind the takeover story. Some took a more structural view of AI risk, emphasizing threats like instability, extreme inequality, gradual human disempowerment, and a collapse of human institutions.

What should we do about it?

When asked how AI safety might prevent disaster, respondents focussed most on

  • the technical solutions we might come up with,

  • spreading a safety mindset through AI research,

  • promoting sensible AI regulation,

  • and helping build a fundamental science of AI.

The research directions people were most excited about were mechanistic interpretability, black box evaluations, and governance research.

What mistakes have been made?

Participants pointed to a range of mistakes they thought the AI safety movement had made. There was no consensus and the focus was quite different from person to person. The most common themes included:

  • an overreliance on overly theoretical argumentation,

  • being too insular,

  • putting people off by pushing weird or extreme views,

  • supporting the leading AGI companies, thereby fuelling race dynamics,

  • not enough independent thought,

  • advocating for an unhelpful pause to AI development,

  • and historically ignoring policy as a potential route to safety.

Limitations

  • People had somewhat different interpretations of my questions, so they were often answering questions that were subtly different from each other.

  • The people I interviewed are not necessarily a representative sample of the AI safety movement as a whole. The sample was pseudo-randomly selected, optimizing for a) diversity of opinion, b) diversity of background, c) seniority, and d) who I could easily track down. Notably, there is an absence of individuals from MIRI, a historically influential AI safety organization, or those who subscribe to similar views. I approached some MIRI team members but no one was available for an interview. This is especially problematic because many respondents criticized MIRI for various reasons, and I didn’t get much of a chance to integrate MIRI’s side of the story into the project.

  • There is also a selection bias: everyone I asked was at least somewhat bought into the idea that AI poses an existential risk.

  • A handful of respondents disagreed with the goal of this project: they thought that those in AI safety typically spend too much time thinking about theories of impact.

  • There were likely a whole bunch of framing effects that I did not control for.

  • In some cases there was a large gap between the interview and this write-up (mostly between 1 and 4 months, and a year for one early interview). Participants’ opinions may have changed over this period.

Subsequent posts

In the following three posts, I present a condensed summary of my findings, describing the main themes that came up for each question:

  1. What will happen? What will human-level AI look like, and how might things go wrong?

  2. What should we do? What should AI safety be trying to achieve and how?

  3. What mistakes has the AI safety movement made?

You don’t need to have read an earlier post to understand a later one, so feel free to zoom straight in on what interests you.

I am very grateful to all of the participants for offering their time to this project. Also thanks to Vael Gates, Siao Si Looi, ChengCheng Tan, Adam Gleave, Quintin Davis, George Anadiotis, Leo Richter, McKenna Fitzgerald, Charlie Griffin and many of the participants for feedback on early drafts.

This work was funded and supported by FAR AI.