Paul Christiano is an American AI safety researcher. He runs the Alignment Research Center and is a research associate at the Future of Humanity Institute, a board member at Ought, and a technical advisor for Open Philanthropy. Previously, he ran the language model alignment team at OpenAI.
Further reading
Christiano, Paul (2021) AMA: Paul Christiano, alignment researcher, AI Alignment Forum, April 28.
Ngo, Richard (2020) EA reading list: Paul Christiano, Effective Altruism Forum, August 4.
Nguyen, Chi (2020) My understanding of Paul Christiano’s iterated amplification AI safety research agenda, Effective Altruism Forum, August 15.
Rice, Issa (2018a) List of discussions between Paul Christiano and Wei Dai, Cause Prioritization Wiki.
Rice, Issa (2018b) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki.
Wiblin, Robert & Keiran Harris (2018) Dr Paul Christiano on how OpenAI is developing real solutions to the “AI alignment problem”, and his vision of how humanity will progressively hand over decision-making to AI systems, 80,000 Hours, October 2.
External links
Paul Christiano. Official website.
Paul Christiano. Effective Altruism Forum account.