
Center for AI Safety


The Center for AI Safety (CAIS) is a non-profit that works to reduce societal-scale risks from AI through research, field-building, and advocacy.

External links

Center for AI Safety. Official website.

Apply for a job.

Is “superhuman” AI forecasting BS? Some experiments on the “539” bot from the Centre for AI Safety

titotal · 18 Sep 2024 13:07 UTC
67 points
4 comments · 14 min read · EA link
(open.substack.com)

US public perception of CAIS statement and the risk of extinction

Jamie E · 22 Jun 2023 16:39 UTC
126 points
4 comments · 9 min read · EA link

Why some people disagree with the CAIS statement on AI

David_Moss · 15 Aug 2023 13:39 UTC
144 points
15 comments · 16 min read · EA link

Modeling the impact of AI safety field-building programs

Center for AI Safety · 10 Jul 2023 17:22 UTC
83 points
0 comments · 7 min read · EA link

Biosecurity and AI: Risks and Opportunities

Center for AI Safety · 27 Feb 2024 18:46 UTC
7 points
2 comments · 7 min read · EA link
(www.safe.ai)

An Overview of Catastrophic AI Risks

Center for AI Safety · 15 Aug 2023 21:52 UTC
37 points
1 comment · 13 min read · EA link
(www.safe.ai)

Statement on AI Extinction – Signed by AGI Labs, Top Academics, and Many Other Notable Figures

Center for AI Safety · 30 May 2023 9:06 UTC
427 points
28 comments · 1 min read · EA link
(www.safe.ai)

Submit Your Toughest Questions for Humanity’s Last Exam

Matrice Jacobine · 18 Sep 2024 8:03 UTC
6 points
0 comments · 2 min read · EA link
(www.safe.ai)