JakubK (Karma: 474)

Averting Catastrophe: Decision Theory for COVID-19, Climate Change, and Potential Disasters of All Kinds

JakubK · 2 May 2023 22:50 UTC
15 points
0 comments · 1 min read · EA link
(nyupress.org)

Notes on “the hot mess theory of AI misalignment”

JakubK · 21 Apr 2023 10:07 UTC
44 points
3 comments · 1 min read · EA link

Risks from Advanced AI

NicoleJaneway · 29 Mar 2023 21:40 UTC
5 points
0 comments · 1 min read · EA link

Risks from Advanced AI

NicoleJaneway · 3 Mar 2023 16:43 UTC
6 points
0 comments · 1 min read · EA link

Next steps after AGISF at UMich

JakubK · 25 Jan 2023 20:57 UTC
18 points
1 comment · 1 min read · EA link

List of technical AI safety exercises and projects

JakubK · 19 Jan 2023 9:35 UTC
15 points
0 comments · 1 min read · EA link

6-paragraph AI risk intro for MAISI

JakubK · 19 Jan 2023 9:22 UTC
8 points
0 comments · 1 min read · EA link

List of lists of EA syllabi

JakubK · 9 Jan 2023 6:34 UTC
31 points
6 comments · 1 min read · EA link
(docs.google.com)

Big list of AI safety videos

JakubK · 9 Jan 2023 6:09 UTC
9 points
0 comments · 1 min read · EA link
(docs.google.com)

Big list of EA videos

JakubK · 9 Jan 2023 5:56 UTC
24 points
6 comments · 1 min read · EA link
(docs.google.com)

Big list of icebreaker questions

JakubK · 9 Jan 2023 4:46 UTC
28 points
1 comment · 1 min read · EA link
(docs.google.com)

Summary of 80k’s AI problem profile

JakubK · 1 Jan 2023 7:48 UTC
19 points
0 comments · 5 min read · EA link
(www.lesswrong.com)

New AI risk intro from Vox [link post]

JakubK · 21 Dec 2022 5:50 UTC
7 points
1 comment · 2 min read · EA link
(www.vox.com)

[Question] Best introductory overviews of AGI safety?

JakubK · 13 Dec 2022 19:04 UTC
21 points
8 comments · 2 min read · EA link
(www.lesswrong.com)

Small improvements for university group organizers

JakubK · 30 Sep 2022 20:09 UTC
7 points
0 comments · 1 min read · EA link

[Question] Does China have AI alignment resources/institutions? How can we prioritize creating more?

JakubK · 4 Aug 2022 19:23 UTC
18 points
9 comments · 1 min read · EA link