Late 2021 MIRI Conversations

This sequence is a chronological series of logs of informal, largely unedited chats about AI between EAs from a number of different organizations. A wide range of topics is covered, beginning with conversations about how difficult the AI alignment problem seems to be.

Participants include Eliezer Yudkowsky (MIRI), Carl Shulman (FHI), Rohin Shah (DeepMind), Richard Ngo (now at OpenAI), Ajeya Cotra (Open Phil), and Paul Christiano (ARC), among others.

Short summaries of each post, and links to audio versions, are available here.

Ngo and Yudkowsky on alignment difficulty

Ngo and Yudkowsky on AI capability gains

Yudkowsky and Christiano discuss “Takeoff Speeds”

Soares, Tallinn, and Yudkowsky discuss AGI cognition

Christiano, Cotra, and Yudkowsky on AI progress

Biology-Inspired AGI Timelines: The Trick That Never Works

Shulman and Yudkowsky on AI progress

More Christiano, Cotra, and Yudkowsky on AI progress

Conversation on technology forecasting and gradualism

Ngo’s view on alignment difficulty

Ngo and Yudkowsky on scientific reasoning and pivotal acts

Christiano and Yudkowsky on AI predictions and human intelligence

Shah and Yudkowsky on alignment failures