AI safety resources and materials

Last edit: 5 Oct 2022 14:10 UTC by Lizka

AI safety resources and materials include syllabi and other educational content related to AI safety.

Related entries

Teaching materials | collections and resources | research summary | AI risk | AI safety

List of AI safety newsletters and other resources

Lizka · 1 May 2023 17:24 UTC
49 points
5 comments · 4 min read · EA link

List of AI safety courses and resources

Daniel del Castillo · 6 Sep 2021 14:26 UTC
50 points
8 comments · 1 min read · EA link

How to pursue a career in technical AI alignment

Charlie Rogers-Smith · 4 Jun 2022 21:36 UTC
265 points
9 comments · 39 min read · EA link

Cost-effectiveness of student programs for AI safety research

Center for AI Safety · 10 Jul 2023 17:23 UTC
53 points
7 comments · 15 min read · EA link

All AGI Safety questions welcome (especially basic ones) [April 2023]

StevenKaas · 8 Apr 2023 4:21 UTC
111 points
174 comments · 1 min read · EA link

Announcing aisafety.training

JJ Hepburn · 17 Jan 2023 1:55 UTC
110 points
4 comments · 1 min read · EA link

[Linkpost] AI Alignment, Explained in 5 Points (updated)

Daniel_Eth · 18 Apr 2023 8:09 UTC
31 points
2 comments · 1 min read · EA link
(medium.com)

MATS AI Safety Strategy Curriculum v2

DanielFilan · 7 Oct 2024 23:01 UTC
29 points
1 comment · 1 min read · EA link

Cost-effectiveness of professional field-building programs for AI safety research

Center for AI Safety · 10 Jul 2023 17:26 UTC
38 points
2 comments · 18 min read · EA link

Modeling the impact of AI safety field-building programs

Center for AI Safety · 10 Jul 2023 17:22 UTC
83 points
0 comments · 7 min read · EA link

Onboarding students to EA/AIS in 4 days with an intensive fellowship

gergo · 5 Dec 2023 10:07 UTC
17 points
0 comments · 4 min read · EA link

AI Safety Arguments: An Interactive Guide

Lukas Trötzmüller🔸 · 1 Feb 2023 19:21 UTC
32 points
5 comments · 3 min read · EA link

The Importance of AI Alignment, explained in 5 points

Daniel_Eth · 11 Feb 2023 2:56 UTC
50 points
4 comments · 13 min read · EA link

$20K in Bounties for AI Safety Public Materials

TW123 · 5 Aug 2022 2:57 UTC
45 points
11 comments · 6 min read · EA link

How to become an AI safety researcher

peterbarnett · 12 Apr 2022 11:33 UTC
112 points
15 comments · 14 min read · EA link

Distribution Shifts and The Importance of AI Safety

Leon_Lang · 29 Sep 2022 22:38 UTC
7 points
0 comments · 1 min read · EA link

Resources that (I think) new alignment researchers should know about

Akash · 28 Oct 2022 22:13 UTC
20 points
2 comments · 1 min read · EA link

Poster Session on AI Safety

Neil Crawford · 12 Nov 2022 3:50 UTC
8 points
0 comments · 4 min read · EA link

AI Risk Intro 1: Advanced AI Might Be Very Bad

L Rudolf L · 11 Sep 2022 10:57 UTC
22 points
0 comments · 30 min read · EA link

[Question] How to create curriculum for self-study towards AI alignment work?

OIUJHKDFS · 7 Jan 2023 19:53 UTC
10 points
5 comments · 1 min read · EA link

AGISF adaptation for in-person groups

Sam Marks · 17 Jan 2023 18:33 UTC
30 points
0 comments · 3 min read · EA link
(www.lesswrong.com)

Thread: Reflections on the AGI Safety Fundamentals course?

Clifford · 18 May 2023 13:11 UTC
27 points
7 comments · 1 min read · EA link

(My suggestions) On Beginner Steps in AI Alignment

Joseph Bloom · 22 Sep 2022 15:32 UTC
36 points
3 comments · 9 min read · EA link

Levelling Up in AI Safety Research Engineering

GabeM · 2 Sep 2022 4:59 UTC
165 points
21 comments · 17 min read · EA link

AI Safety University Organizing: Early Takeaways from Thirteen Groups

Agustín Covarrubias 🔸 · 2 Oct 2024 14:39 UTC
45 points
2 comments · 9 min read · EA link

Big list of AI safety videos

JakubK · 9 Jan 2023 6:09 UTC
9 points
0 comments · 1 min read · EA link
(docs.google.com)

“AI Risk Discussions” website: Exploring interviews from 97 AI Researchers

Vael Gates · 2 Feb 2023 1:00 UTC
46 points
1 comment · 1 min read · EA link

An audio version of the alignment problem from a deep learning perspective by Richard Ngo Et Al

Miguel · 3 Feb 2023 19:32 UTC
18 points
0 comments · 1 min read · EA link
(www.whitehatstoic.com)

AI Safety Info Distillation Fellowship

robertskmiles · 17 Feb 2023 16:16 UTC
80 points
1 comment · 1 min read · EA link

Summary of 80k’s AI problem profile

JakubK · 1 Jan 2023 7:48 UTC
19 points
0 comments · 5 min read · EA link
(www.lesswrong.com)

Seeking input on a list of AI books for broader audience

Darren McKee · 27 Feb 2023 22:40 UTC
49 points
14 comments · 5 min read · EA link

Problems of people new to AI safety and my project ideas to mitigate them

Igor Ivanov · 3 Mar 2023 17:35 UTC
19 points
0 comments · 7 min read · EA link

My attempt at explaining the case for AI risk in a straightforward way

JulianHazell · 25 Mar 2023 16:32 UTC
25 points
7 comments · 18 min read · EA link
(muddyclothes.substack.com)

An A.I. Safety Presentation at RIT

Nicholas / Heather Kross · 27 Mar 2023 23:49 UTC
5 points
0 comments · 1 min read · EA link

Concrete Steps to Get Started in Transformer Mechanistic Interpretability

Neel Nanda · 26 Dec 2022 13:00 UTC
18 points
0 comments · 12 min read · EA link

All AGI Safety questions welcome (especially basic ones) [May 2023]

StevenKaas · 8 May 2023 22:30 UTC
19 points
11 comments · 1 min read · EA link

[Question] Best introductory overviews of AGI safety?

JakubK · 13 Dec 2022 19:04 UTC
21 points
8 comments · 2 min read · EA link
(www.lesswrong.com)

AI Safety Fundamentals: An Informal Cohort Starting Soon! (cross-posted to lesswrong.com)

Tiago · 4 Jun 2023 18:21 UTC
6 points
0 comments · 1 min read · EA link
(www.lesswrong.com)

ENAIS has launched a newsletter for AIS fieldbuilders

gergo · 22 Nov 2024 10:45 UTC
18 points
0 comments · 1 min read · EA link

China x AI Reference List

Saad Siddiqui · 13 Mar 2024 18:57 UTC
61 points
3 comments · 3 min read · EA link
(docs.google.com)

We are sharing a new website template for AI Safety groups!

AIS Hungary · 13 Mar 2024 16:40 UTC
10 points
2 comments · 1 min read · EA link

What AI Safety Materials Do ML Researchers Find Compelling?

Vael Gates · 28 Dec 2022 2:03 UTC
130 points
12 comments · 1 min read · EA link

AI Safety For Dum­mies (Like Me)

Madhav Malhotra · 24 Aug 2022 20:26 UTC
22 points
7 comments · 20 min read · EA link

Uncontrollable AI as an Existential Risk

Karl von Wendt · 9 Oct 2022 10:37 UTC
28 points
0 comments · 1 min read · EA link

New AI risk intro from Vox [link post]

JakubK · 21 Dec 2022 5:50 UTC
7 points
1 comment · 2 min read · EA link
(www.vox.com)

AI Safety Executive Summary

Sean Osier · 6 Sep 2022 8:26 UTC
20 points
2 comments · 5 min read · EA link
(seanosier.notion.site)

Let’s talk about uncontrollable AI

Karl von Wendt · 9 Oct 2022 10:37 UTC
12 points
2 comments · 1 min read · EA link

Learning as much Deep Learning math as I could in 24 hours

Phosphorous · 8 Jan 2023 2:19 UTC
58 points
6 comments · 7 min read · EA link

AI Risk Intro 2: Solving The Problem

L Rudolf L · 24 Sep 2022 9:33 UTC
11 points
0 comments · 28 min read · EA link
(www.perfectlynormal.co.uk)

Resources I send to AI researchers about AI safety

Vael Gates · 11 Jan 2023 1:24 UTC
43 points
0 comments · 1 min read · EA link

My experience building mathematical ML skills with a course from UIUC

Naoya Okamoto · 9 Jun 2024 11:41 UTC
2 points
0 comments · 10 min read · EA link

Power-Seeking AI and Existential Risk

antoniofrancaib · 11 Oct 2022 21:47 UTC
10 points
0 comments · 1 min read · EA link

There should be a public adversarial collaboration on AI x-risk

pradyuprasad · 23 Jan 2023 4:09 UTC
56 points
5 comments · 2 min read · EA link