
Paul Christiano

Paul Christiano is an American AI safety researcher. He runs the Alignment Research Center and is a research associate at the Future of Humanity Institute, a board member at Ought, and a technical advisor to Open Philanthropy. Previously, he ran the language model alignment team at OpenAI.

Further reading

Christiano, Paul (2021) AMA: Paul Christiano, alignment researcher, AI Alignment Forum, April 28.

Ngo, Richard (2020) EA reading list: Paul Christiano, Effective Altruism Forum, August 4.

Nguyen, Chi (2020) My understanding of Paul Christiano’s iterated amplification AI safety research agenda, Effective Altruism Forum, August 15.

Rice, Issa (2018a) List of discussions between Paul Christiano and Wei Dai, Cause Prioritization Wiki.

Rice, Issa (2018b) List of discussions between Eliezer Yudkowsky and Paul Christiano, Cause Prioritization Wiki.

Wiblin, Robert & Keiran Harris (2018) Dr Paul Christiano on how OpenAI is developing real solutions to the “AI alignment problem”, and his vision of how humanity will progressively hand over decision-making to AI systems, 80,000 Hours, October 2.

External links

Paul Christiano. Official website.

Paul Christiano. Effective Altruism Forum account.

Posts tagged Paul Christiano

EA reading list: Paul Christiano
richard_ngo · 4 Aug 2020 13:36 UTC · 23 points · 0 comments · 1 min read · EA link

My Understanding of Paul Christiano’s Iterated Amplification AI Safety Research Agenda
Chi · 15 Aug 2020 19:59 UTC · 38 points · 3 comments · 40 min read · EA link

Integrity for consequentialists
Paul_Christiano · 14 Nov 2016 20:56 UTC · 169 points · 18 comments · 8 min read · EA link

[linkpost] Christiano on agreement/disagreement with Yudkowsky’s “List of Lethalities”
Owen Cotton-Barratt · 19 Jun 2022 22:47 UTC · 130 points · 1 comment · 1 min read · EA link (www.lesswrong.com)

Thoughts on responsible scaling policies and regulation
Paul_Christiano · 24 Oct 2023 22:25 UTC · 177 points · 5 comments · 6 min read · EA link

Yudkowsky and Christiano on AI Takeoff Speeds [LINKPOST]
aogara · 5 Apr 2022 0:57 UTC · 15 points · 0 comments · 11 min read · EA link

Paul Christiano: Current work in AI alignment
EA Global · 3 Apr 2020 7:06 UTC · 80 points · 3 comments · 22 min read · EA link (www.youtube.com)

Yudkowsky and Christiano discuss “Takeoff Speeds”
EliezerYudkowsky · 22 Nov 2021 19:42 UTC · 42 points · 0 comments · 60 min read · EA link

Paul Christiano on how OpenAI is developing real solutions to the ‘AI alignment problem’, and his vision of how humanity will progressively hand over decision-making to AI systems
80000_Hours · 2 Oct 2018 11:49 UTC · 6 points · 0 comments · 188 min read · EA link

AI impacts and Paul Christiano on takeoff speeds
Crosspost · 2 Mar 2018 11:16 UTC · 4 points · 0 comments · 1 min read · EA link

Christiano (ARC) and GA (Conjecture) Discuss Alignment Cruxes
Andrea_Miotti · 24 Feb 2023 23:03 UTC · 16 points · 1 comment · 1 min read · EA link

Paul Christiano on cause prioritization
admin3 · 23 Mar 2014 22:44 UTC · 5 points · 2 comments · 11 min read · EA link

Paul Christiano on Dwarkesh Podcast
ESRogs · 3 Nov 2023 22:13 UTC · 5 points · 0 comments · 1 min read · EA link (www.dwarkeshpatel.com)

Paul Christiano – Machine intelligence and capital accumulation
Tessa · 15 May 2014 0:10 UTC · 21 points · 0 comments · 6 min read · EA link (rationalaltruist.com)