JakubK

Karma: 474

Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds

JakubK · May 2, 2023, 10:50 PM
15 points
0 comments · 1 min read · EA link
(nyupress.org)

Notes on "the hot mess theory of AI misalignment"

JakubK · Apr 21, 2023, 10:07 AM
44 points
3 comments · EA link

Risks from Advanced AI

NicoleJaneway 🔸 · Mar 29, 2023, 9:40 PM
5 points
0 comments · 1 min read · EA link

Risks from Advanced AI

NicoleJaneway 🔸 · Mar 3, 2023, 4:43 PM
6 points
0 comments · 1 min read · EA link

Next steps after AGISF at UMich

JakubK · Jan 25, 2023, 8:57 PM
18 points
1 comment · EA link

List of technical AI safety exercises and projects

JakubK · Jan 19, 2023, 9:35 AM
15 points
0 comments · EA link

6-paragraph AI risk intro for MAISI

JakubK · Jan 19, 2023, 9:22 AM
8 points
0 comments · EA link

List of lists of EA syllabi

JakubK · Jan 9, 2023, 6:34 AM
31 points
6 comments · 1 min read · EA link
(docs.google.com)

Big list of AI safety videos

JakubK · Jan 9, 2023, 6:09 AM
9 points
0 comments · 1 min read · EA link
(docs.google.com)

Big list of EA videos

JakubK · Jan 9, 2023, 5:56 AM
24 points
6 comments · 1 min read · EA link
(docs.google.com)

Big list of icebreaker questions

JakubK · Jan 9, 2023, 4:46 AM
28 points
1 comment · 1 min read · EA link
(docs.google.com)

Summary of 80k's AI problem profile

JakubK · Jan 1, 2023, 7:48 AM
19 points
0 comments · 5 min read · EA link
(www.lesswrong.com)

New AI risk intro from Vox [link post]

JakubK · Dec 21, 2022, 5:50 AM
7 points
1 comment · 2 min read · EA link
(www.vox.com)

[Question] Best introductory overviews of AGI safety?

JakubK · Dec 13, 2022, 7:04 PM
21 points
8 comments · 2 min read · EA link
(www.lesswrong.com)

Small improvements for university group organizers

JakubK · Sep 30, 2022, 8:09 PM
7 points
0 comments · 1 min read · EA link

[Question] Does China have AI alignment resources/institutions? How can we prioritize creating more?

JakubK · Aug 4, 2022, 7:23 PM
18 points
9 comments · 1 min read · EA link