AI views and disagreements AMA: Christiano, Ngo, Shah, Soares, Yudkowsky


In late 2021, MIRI hosted a series of conversations about AI risk with a number of other EAs working in this problem area. As of today, we’ve finished posting the (almost entirely raw and unedited) results of those conversations.

To help readers digest the sequence now that it’s complete, and to follow up on threads of interest, we’re hosting an AMA this Wednesday (March 2) featuring researchers from various organizations (all speaking in their personal capacity):

  • Paul Christiano (ARC)

  • Richard Ngo (OpenAI)

  • Rohin Shah (DeepMind)

  • Nate Soares (MIRI)

  • Eliezer Yudkowsky (MIRI)

You’re welcome to post questions, objections, etc. on any vaguely relevant topic, whether or not you’ve read the whole sequence.

The AMA is taking place on LessWrong, and is open to comments now: https://www.lesswrong.com/s/n945eovrA3oDueqtq/p/34Gkqus9vusXRevR8. If you don’t have a LessWrong account, feel free to post questions below and I’ll cross-post them.