This sequence is a chronological series of logs of informal, largely unedited chats about AI between EAs from a number of different organizations. The conversations cover a wide range of topics, beginning with discussions of how difficult the AI alignment problem seems to be.
Participants include Eliezer Yudkowsky (MIRI), Carl Shulman (FHI), Rohin Shah (DeepMind), Richard Ngo (now at OpenAI), Ajeya Cotra (Open Phil), and Paul Christiano (ARC), among others.
Short summaries of each post, and links to audio versions, are available here.