
Center for Human-Compatible Artificial Intelligence


The Center for Human-Compatible Artificial Intelligence (CHAI) is an AI alignment research center at the University of California, Berkeley. Its mission is “to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.”

CHAI is one of the four organizations recommended by Founders Pledge in their cause report on safeguarding the long-term future (Halstead 2019).

Bibliography

Center for Human-Compatible Artificial Intelligence (2021) About, Center for Human-Compatible Artificial Intelligence.

Halstead, John (2019) Safeguarding the future cause area report, Founders Pledge, January (updated December 2020).

Open Philanthropy (2016) UC Berkeley — Center for Human-Compatible AI (2016), Open Philanthropy, August.

Rice, Issa (2018) Timeline of Center for Human-Compatible AI, Timelines Wiki, February 8.

External links

Center for Human-Compatible Artificial Intelligence. Official website.

Related entries

AI alignment | Berkeley Existential Risk Initiative | Human Compatible | Stuart Russell

[Question] Is there evidence that recommender systems are changing users’ preferences?

zdgroff · 12 Apr 2021 19:11 UTC
60 points · 15 comments · 1 min read · EA link

2020 AI Alignment Literature Review and Charity Comparison

Larks · 21 Dec 2020 15:25 UTC
134 points · 14 comments · 68 min read · EA link

Long-Term Future Fund: May 2021 grant recommendations

abergal · 27 May 2021 6:44 UTC
110 points · 15 comments · 57 min read · EA link

CHAI Internship Application

Martin Fukui · 12 Nov 2020 0:22 UTC
6 points · 1 comment · 1 min read · EA link

Interview about CHAI

rosiecampbell · 3 Dec 2018 3:41 UTC
16 points · 0 comments · EA link (medium.com)

A conversation with Rohin Shah

AI Impacts · 12 Nov 2019 1:31 UTC
27 points · 8 comments · 33 min read · EA link (aiimpacts.org)

Publication of Stuart Russell’s new book on AI safety—reviews needed

CarolineJ · 8 Oct 2019 5:29 UTC
40 points · 8 comments · 1 min read · EA link