FLI AI Alignment podcast: Evan Hubinger on Inner Alignment, Outer Alignment, and Proposals for Building Safe Advanced AI

Link post

This post is a link to the Future of Life Institute’s most recent AI alignment podcast episode, which I was a part of. I talk about a lot of stuff on it that I think is likely to be relevant to other EAs here, including what I see as the biggest problems that need to be solved in AI safety, as well as some of what I see as the most promising solutions.

I’m happy to answer any questions people might have here. The full transcript of the podcast is also available at the above link, in case anyone would rather read than listen.