
Building the field of AI safety


Building the field of AI safety refers to the family of interventions aimed at growing, shaping, or otherwise improving AI safety as an intellectual community.

Related entries

AI risk | AI safety | existential risk | building effective altruism

AGI Safety Fundamentals programme is contracting a low-code engineer

Jamie Bernardi · 26 Aug 2022 15:43 UTC · 39 points · 4 comments · 5 min read · EA link

Announcing the Cambridge Boston Alignment Initiative [Hiring!]

kuhanj · 2 Dec 2022 1:07 UTC · 73 points · 0 comments · 1 min read · EA link

Announcing the AI Safety Field Building Hub, a new effort to provide AISFB projects, mentorship, and funding

Vael Gates · 28 Jul 2022 21:29 UTC · 124 points · 5 comments · 6 min read · EA link

AI safety university groups: a promising opportunity to reduce existential risk

mic · 30 Jun 2022 18:37 UTC · 50 points · 1 comment · 11 min read · EA link

The Tree of Life: Stanford AI Alignment Theory of Change

Gabriel Mukobi · 2 Jul 2022 18:32 UTC · 68 points · 5 comments · 14 min read · EA link

Announcing the Harvard AI Safety Team

Xander Davies · 30 Jun 2022 18:34 UTC · 128 points · 4 comments · 5 min read · EA link

Establishing Oxford’s AI Safety Student Group: Lessons Learnt and Our Model

CharlieGriffin · 21 Sep 2022 7:57 UTC · 70 points · 0 comments · 1 min read · EA link

Resources that (I think) new alignment researchers should know about

Akash · 28 Oct 2022 22:13 UTC · 19 points · 2 comments · 1 min read · EA link

Are alignment researchers devoting enough time to improving their research capacity?

Carson Jones · 4 Nov 2022 0:58 UTC · 10 points · 1 comment · 1 min read · EA link

What are some low-cost outside-the-box ways to do/fund alignment research?

trevor1 · 11 Nov 2022 5:57 UTC · 2 points · 3 comments · 1 min read · EA link

Podcast: Shoshannah Tekofsky on skilling up in AI safety, visiting Berkeley, and developing novel research ideas

Akash · 25 Nov 2022 20:47 UTC · 9 points · 0 comments · 1 min read · EA link

Update on Harvard AI Safety Team and MIT AI Alignment

Xander Davies · 2 Dec 2022 6:09 UTC · 53 points · 0 comments · 1 min read · EA link

Transcripts of interviews with AI researchers

Vael Gates · 9 May 2022 6:03 UTC · 134 points · 13 comments · 2 min read · EA link

Vael Gates: Risks from Advanced AI (June 2022)

Vael Gates · 14 Jun 2022 0:49 UTC · 45 points · 5 comments · 30 min read · EA link

Resources I send to AI researchers about AI safety

Vael Gates · 14 Jun 2022 2:23 UTC · 60 points · 1 comment · 10 min read · EA link

[Question] Does China have AI alignment resources/institutions? How can we prioritize creating more?

jskatt · 4 Aug 2022 19:23 UTC · 17 points · 9 comments · 1 min read · EA link

*New* Canada AI Safety & Governance community

Wyatt Tessari L'Allié · 29 Aug 2022 15:58 UTC · 31 points · 2 comments · 1 min read · EA link

Announcing an Empirical AI Safety Program

Joshc · 13 Sep 2022 21:39 UTC · 64 points · 7 comments · 2 min read · EA link

Stress Externalities More in AI Safety Pitches

NickGabs · 26 Sep 2022 20:31 UTC · 31 points · 13 comments · 2 min read · EA link

We all teach: here’s how to do it better

Michael Noetel · 30 Sep 2022 2:06 UTC · 152 points · 12 comments · 24 min read · EA link

AI Safety Unconference NeurIPS 2022

Orpheus_Lummis · 7 Nov 2022 15:39 UTC · 13 points · 5 comments · 1 min read · EA link (aisafetyevents.org)

AI Safety Microgrant Round

Chris Leong · 14 Nov 2022 4:25 UTC · 82 points · 1 comment · 3 min read · EA link