AXRP Episode 24 - Superalignment with Jan Leike

I thought people here might be interested in my interview with Jan Leike about OpenAI’s superalignment team and their plan to solve superintelligence alignment within four years.
My blurb for the episode:
Recently, OpenAI made a splash by announcing a new “Superalignment” team. Led by Jan Leike and Ilya Sutskever, the team would consist of top researchers attempting to solve alignment for superintelligent AIs within four years, by figuring out how to build a trustworthy, human-level AI alignment researcher and then using it to solve the rest of the problem. But what does this plan actually involve? In this episode, I talk to Jan Leike about the plan and the challenges it faces.
A link to the transcript is here, and a link to the audio is here.