
Berkeley Existential Risk Initiative


The Berkeley Existential Risk Initiative (BERI) is a non-profit organization that takes on ethical and legal responsibility for projects deemed important for existential risk reduction. It was founded in 2017 by Andrew Critch (Vaughan 2017; Berkeley Existential Risk Initiative 2018).

Organizations helped by BERI include the Center for Human-Compatible Artificial Intelligence, the Centre for the Study of Existential Risk, and the Future of Humanity Institute (Berkeley Existential Risk Initiative 2021).

Bibliography

Berkeley Existential Risk Initiative (2018) Semi-annual report, Berkeley Existential Risk Initiative, August.

Berkeley Existential Risk Initiative (2021) Mission, Berkeley Existential Risk Initiative.

Rice, Issa et al. (2018) Timeline of Berkeley Existential Risk Initiative, Timelines Wiki.

Vaughan, Kerry (2017) Update on Effective Altruism Funds, Effective Altruism Forum, April 20.

External links

Berkeley Existential Risk Initiative. Official website.

Related entries

Center for Human-Compatible Artificial Intelligence | Centre for the Study of Existential Risk | existential risk | Future of Humanity Institute

Posts tagged Berkeley Existential Risk Initiative

BERI seeking new collaborators (sawyer, 29 Apr 2021)

2020 AI Alignment Literature Review and Charity Comparison (Larks, 21 Dec 2020)

BERI seeking new collaborators (sawyer, 6 May 2020)

BERI’s “Project Grants” Program—Round One (rebecca_raible, 8 Jun 2018)

[Link] BERI handing off Jaan Tallinn’s grantmaking (Milan_Griffes, 27 Aug 2019)

EA grants available to individuals (crosspost from LessWrong) (Jameson Quinn, 7 Feb 2019)

“Taking AI Risk Seriously” – Thoughts by Andrew Critch (Raemon, 19 Nov 2018; www.lesswrong.com)