
Paul Christiano


Paul Christiano is an American AI safety researcher. He runs the Alignment Research Center and is a research associate at the Future of Humanity Institute, a board member at Ought, and a technical advisor for Open Philanthropy. Previously, he ran the language model alignment team at OpenAI.

Further reading

Christiano, Paul (2021) AMA: Paul Christiano, alignment researcher, AI Alignment Forum, April 28.

Ngo, Richard (2020) EA reading list: Paul Christiano, Effective Altruism Forum, August 4.

Nguyen, Chi (2020) My understanding of Paul Christiano’s iterated amplification AI safety research agenda, Effective Altruism Forum, August 15.

Rice, Issa (2018a) List of discussions between Paul Christiano and Wei Dai, Cause Prioritization Wiki.

Rice, Issa (2018b) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki.

Wiblin, Robert & Keiran Harris (2018) Dr Paul Christiano on how OpenAI is developing real solutions to the “AI alignment problem”, and his vision of how humanity will progressively hand over decision-making to AI systems, 80,000 Hours, October 2.

External links

Paul Christiano. Official website.

Paul Christiano. Effective Altruism Forum account.

Posts tagged Paul Christiano

EA reading list: Paul Christiano

richard_ngo, Aug 4, 2020, 1:36 PM
23 points · 0 comments · 1 min read

My Understanding of Paul Christiano’s Iterated Amplification AI Safety Research Agenda

Chi, Aug 15, 2020, 7:59 PM
38 points · 3 comments · 39 min read

Integrity for consequentialists

Paul_Christiano, Nov 14, 2016, 8:56 PM
177 points · 18 comments · 8 min read

Thoughts on responsible scaling policies and regulation

Paul_Christiano, Oct 24, 2023, 10:25 PM
191 points · 5 comments · 6 min read

[linkpost] Christiano on agreement/disagreement with Yudkowsky’s “List of Lethalities”

Owen Cotton-Barratt, Jun 19, 2022, 10:47 PM
130 points · 1 comment · 1 min read
(www.lesswrong.com)

Yudkowsky and Christiano on AI Takeoff Speeds [LINKPOST]

aogara, Apr 5, 2022, 12:57 AM
15 points · 0 comments · 11 min read

Paul Christiano: Current work in AI alignment

EA Global, Apr 3, 2020, 7:06 AM
80 points · 3 comments · 24 min read
(www.youtube.com)

Yudkowsky and Christiano discuss “Takeoff Speeds”

EliezerYudkowsky, Nov 22, 2021, 7:42 PM
42 points · 0 comments · 60 min read

Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes

Andrea_Miotti, Feb 24, 2023, 11:03 PM
16 points · 1 comment · 1 min read

Paul Christiano on Dwarkesh Podcast

ESRogs, Nov 3, 2023, 10:13 PM
5 points · 0 comments · 1 min read
(www.dwarkeshpatel.com)

Imitation Learning is Probably Existentially Safe

Vasco Grilo🔸, Apr 30, 2024, 5:06 PM
19 points · 7 comments · 3 min read
(www.openphilanthropy.org)

Paul Christiano on cause prioritization

admin3, Mar 23, 2014, 10:44 PM
5 points · 2 comments · 11 min read

AI impacts and Paul Christiano on takeoff speeds

Crosspost, Mar 2, 2018, 11:16 AM
4 points · 0 comments · 1 min read

Paul Christiano on how OpenAI is developing real solutions to the ‘AI alignment problem’, and his vision of how humanity will progressively hand over decision-making to AI systems

80000_Hours, Oct 2, 2018, 11:49 AM
6 points · 0 comments · 185 min read

Paul Christiano – Machine intelligence and capital accumulation

Tessa A 🔸, May 15, 2014, 12:10 AM
21 points · 0 comments · 6 min read
(rationalaltruist.com)