Center for Human-Compatible Artificial Intelligence

Last edit: 16 Jul 2022 8:47 UTC by Leo

The Center for Human-Compatible Artificial Intelligence (CHAI) is an AI alignment research center at the University of California, Berkeley. Its mission is “to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.”[1]

Funding

As of June 2022, CHAI has received over $17.1 million in funding from Open Philanthropy,[2] nearly $780,000 from the Survival and Flourishing Fund,[3] and over $120,000 from Effective Altruism Funds.[4][5]

Evaluation

CHAI is one of the four organizations recommended by Founders Pledge in their cause report on safeguarding the long-term future.[6]

Further reading

Open Philanthropy (2016) UC Berkeley — Center for Human-Compatible AI (2016), Open Philanthropy, August.

Rice, Issa (2018) Timeline of Center for Human-Compatible AI, Timelines Wiki, February 8.

External links

Center for Human-Compatible Artificial Intelligence. Official website.

Apply for a job.

Donate to CHAI.

Related entries

AI alignment | Berkeley Existential Risk Initiative | Human Compatible | Stuart Russell

  1. ^ Center for Human-Compatible Artificial Intelligence (2021) About, Center for Human-Compatible Artificial Intelligence.

  2. ^ Open Philanthropy (2022) Grants database: Center for Human-Compatible AI, Open Philanthropy.

  3. ^ Survival and Flourishing Fund (2019) SFF-2020-H2 S-process recommendations announcement, Survival and Flourishing Fund.

  4. ^ Long-Term Future Fund (2020) September 2020: Long-Term Future Fund grants, Effective Altruism Funds, September.

  5. ^ Long-Term Future Fund (2021) May 2021: Long-Term Future Fund grants, Effective Altruism Funds, May.

  6. ^ Halstead, John (2019) Safeguarding the future cause area report, Founders Pledge, January (updated December 2020).

[Question] Is there evidence that recommender systems are changing users’ preferences?

zdgroff · 12 Apr 2021 19:11 UTC
60 points
15 comments · 1 min read · EA link

2020 AI Alignment Literature Review and Charity Comparison

Larks · 21 Dec 2020 15:25 UTC
155 points
16 comments · 68 min read · EA link

2021 AI Alignment Literature Review and Charity Comparison

Larks · 23 Dec 2021 14:06 UTC
176 points
18 comments · 73 min read · EA link

Long-Term Future Fund: May 2021 grant recommendations

abergal · 27 May 2021 6:44 UTC
110 points
17 comments · 57 min read · EA link

AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk—Request for Participation [Linkpost]

Kiliank · 9 May 2022 19:53 UTC
17 points
2 comments · 8 min read · EA link

Publication of Stuart Russell’s new book on AI safety—reviews needed

Caro · 8 Oct 2019 5:29 UTC
40 points
8 comments · 1 min read · EA link

CHAI internship applications are open (due Nov 13)

Erik Jenner · 26 Oct 2023 0:48 UTC
6 points
1 comment · 3 min read · EA link

CHAI Internship Application

Martin Fukui · 12 Nov 2020 0:22 UTC
6 points
1 comment · 1 min read · EA link

Interview about CHAI

rosiecampbell · 3 Dec 2018 3:41 UTC
16 points
0 comments · 1 min read · EA link
(medium.com)

A conversation with Rohin Shah

AI Impacts · 12 Nov 2019 1:31 UTC
27 points
8 comments · 33 min read · EA link
(aiimpacts.org)