“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

TL;DR: Check out the new AI Risk Discussions website! It contains 83 transcripts of AI researcher interviews and a quantitative analysis (full report) of these interviews. We also built an interactive walkthrough of perspectives and arguments. We would be grateful for any feedback!
In February and March 2022, Vael Gates conducted 97 interviews with AI researchers about their perceptions of the future of AI, focusing on their responses to arguments for potential risk from advanced AI systems. Eleven transcripts from those interviews were released at the time, along with a talk summarizing the findings and a promise of further analysis of the results.

We have now finished the analysis, and created a website aimed at a technical audience to explore the results!

Lukas Trötzmüller has written an interactive walkthrough of the perspectives interviewees commonly held, along with counterarguments describing why we might still be concerned about risk from advanced AI (introduction to this walkthrough). Maheen Shermohammed has conducted a quantitative analysis (full report) of all the interview transcripts, which were laboriously tagged by Zi Cheng (Sam) Huang. An army of people helped anonymize a new set of transcripts that researchers gave permission to release publicly. We’ve also constructed a new resources page (based on which materials ML researchers find compelling) and a “what can I do?” page for further investigation. Michael Keenan led the effort to put the whole website together.

Beyond compiling the research outputs of this interview series, we want this to be a website that can be usefully forwarded to technical AI researchers, and we would be grateful for any feedback that could improve it. Please feel free to leave comments on the interactive walkthrough post or the quantitative analysis post, or message Vael with any notes you have. Thanks everyone!

Crossposted to LessWrong