Paul Christiano

Last edit: 8 May 2021 21:29 UTC by Pablo

Paul Christiano is an American AI safety researcher. Christiano runs the Alignment Research Center, and is a research associate at the Future of Humanity Institute, a board member at Ought and a technical advisor for Open Philanthropy. Previously, he ran the language model alignment team at OpenAI.


Bibliography

Christiano, Paul (2021) AMA: Paul Christiano, alignment researcher, AI Alignment Forum, April 28.

Ngo, Richard (2020) EA reading list: Paul Christiano, Effective Altruism Forum, August 4.

Rice, Issa (2018a) List of discussions between Paul Christiano and Wei Dai, Cause Prioritization Wiki.

Rice, Issa (2018b) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki.

Wiblin, Robert & Keiran Harris (2018) Dr Paul Christiano on how OpenAI is developing real solutions to the “AI alignment problem”, and his vision of how humanity will progressively hand over decision-making to AI systems, 80,000 Hours, October 2.

External links

Paul Christiano. Official website.

Integrity for consequentialists

Paul_Christiano · 14 Nov 2016 20:56 UTC · 48 points · 15 comments · EA link

Paul Christiano: Current work in AI alignment

EA Global · 3 Apr 2020 7:06 UTC · 16 points · 0 comments · 23 min read · EA link

Paul Christiano on cause prioritization

admin3 · 23 Mar 2014 22:44 UTC · 5 points · 2 comments · EA link

My Understanding of Paul Christiano’s Iterated Amplification AI Safety Research Agenda

Chi · 15 Aug 2020 19:59 UTC · 34 points · 3 comments · 39 min read · EA link

AI impacts and Paul Christiano on takeoff speeds

Crosspost · 2 Mar 2018 11:16 UTC · 4 points · 0 comments · EA link

EA reading list: Paul Christiano

richard_ngo · 4 Aug 2020 13:36 UTC · 22 points · 0 comments · 1 min read · EA link