We Ran an AI Timelines Retreat

TL;DR:

UCLA EA ran an AI timelines retreat for community members interested in pursuing AI safety as a career. Attendees sought to form inside views on the future of AI based on an object-level analysis of current AI capabilities.

We highly recommend that other university groups hold similar small (<15 person), object-level-focused retreats. We tentatively recommend that other organizers hold AI timelines retreats specifically, with caveats discussed below.

Why did we run the UCLA EA AI Timelines Retreat?

Most people in the world do not take AI risk seriously. On the other hand, some prominent members of our community believe we have virtually no chance of surviving this century due to misaligned AI. These are wild-seeming takes with massive implications. We think that assessing AI risk should be a serious and thoughtful endeavor. We sought to create space for our participants to form an inside view on the future development of AI systems based on a technical analysis of current AI systems.

We aimed for participants to form evidence-based views on questions such as:

  • What are the concrete ways AI could harm us in the near future and beyond?

  • What are the most probable ways AGI could be developed?

  • How do recent developments in AI (such as DeepMind’s Gato) update your model of the urgency of AI risk?

  • Where can we focus our interventions to prevent the greatest risks from future AI systems?

What did the weekend look like?

The retreat began Friday evening and ended Sunday afternoon. We had 12 participants from UCLA, Harvard, UCI, and UC Berkeley, with a 1:3 ratio of grad students to undergrads. Participants already had an interest in AI safety, and most were planning to pursue it as a career.

We began with structured debates and presentations, then spent the bulk of Saturday writing forecasting reports on the development of AI systems. Participants could work on these reports in teams or individually. Sunday was devoted to positive peer pressure: encouraging one another to apply to AI safety opportunities or start an AI safety-related project.

Govind Pimpale, Mario Peng Lee, Aiden Ament, and I (Lenny McCline) organized the content of the weekend. Leilani Bellamy from Canopy Retreats ran operations for the event, with help from Vishnupriya Bohra and Migle Railaite.

You can check out this Google Drive folder for resources and selected reports from the retreat.

What went well?

  • We were able to keep the conversation on the object level throughout the weekend, which is what we were aiming for.

  • Attendees seemed to gain what we wanted them to. Here is some well-articulated feedback from the event:

    • “I now have a proper inside view of timelines. I wasn’t just very uncertain about timelines, I had no personal model, so I didn’t have the ability to really analyze new evidence into updates or make uncertain predictions; I just relied on the weighted average of the views of some senior researchers. I now have a much clearer sense of what I actually predict about the future, including both a distribution for when I expect AGI, but also many predictions about what innovations I expect in what order, what I think is necessary for AGI, and the effects of pre-AGI AI.”

Having the clear goal of writing a timelines report kept the conversations focused and productive. Although attendees could have written timelines reports without attending the retreat, having a wide range of passionate AI safety people to bounce ideas off of was reportedly quite helpful during the writing process. All of our attendees found the weekend valuable (an average "would you recommend" score of 9.1 out of 10) and came away with a clearer picture of AI timelines grounded in the capabilities of current AI systems.

The small size of the retreat encouraged deeper conversations. With only 12 attendees, everyone felt approachable; we hypothesize that having more than 15 attendees would have strongly diminished this effect.

What could be improved?

  • A more intellectually diverse pool of attendees

    • Most endorsed shorter timelines

    • Would’ve benefited from having ML experts with longer timelines

  • Conversations weren’t directed toward alignment research agendas

    • Agenda understanding seems more important for determining future steps

    • Timelines are generally more approachable

Having attendees with more varied opinions would have pushed everyone toward more defensible beliefs. While people presented a wide range of ideas, long timelines were underrepresented: most attendees forecast AGI in under 15 years (heavily influenced by DeepMind's recent Gato paper), and we would have liked ML experts with a more skeptical viewpoint to share it.

For improvements related to the programming details of the event, check out this commented schedule of the event, or message Lenny on the forum for the debrief document.

What’s next for UCLA EA?

We’re happy with the outcome of this retreat and have a strong team poised to run monthly weekend retreats similar to this one starting in the fall (we expect mostly AI safety topics, with occasional biosecurity ones).

Feel free to fill out this form if you’d like to pre-apply for future retreats. All retreats will be hosted in Westwood, CA, and anyone in the community may apply. Note that this is an experiment.

If you have any suggestions for future retreat topics (e.g. an ELK weekend), please write them in the comments!