Paul Christiano is an American AI safety researcher. He runs the Alignment Research Center and is a research associate at the Future of Humanity Institute, a board member at Ought, and a technical advisor for Open Philanthropy. Previously, he led the language model alignment team at OpenAI.
Christiano, Paul (2021) AMA: Paul Christiano, alignment researcher, AI Alignment Forum, April 28.
Ngo, Richard (2020) EA reading list: Paul Christiano, Effective Altruism Forum, August 4.
Rice, Issa (2018a) List of discussions between Paul Christiano and Wei Dai, Cause Prioritization Wiki.
Rice, Issa (2018b) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki.
Wiblin, Robert & Keiran Harris (2018) Dr Paul Christiano on how OpenAI is developing real solutions to the “AI alignment problem”, and his vision of how humanity will progressively hand over decision-making to AI systems, 80,000 Hours, October 2.
Paul Christiano. Official website.