CLR’s Safe Pareto Improvements Research Agenda

Anthony DiGiovanni 🔸 · 20 Apr 2026 9:28 UTC
6 points
0 comments · 14 min read · EA link

Better Futures Discussion Thread: With Fin Moorhouse

Toby Tremlett🔹 · 20 Apr 2026 8:59 UTC
8 points
0 comments · 1 min read · EA link

Should “unusual” fields have a place in universities?

Fırat Akova · 20 Apr 2026 8:23 UTC
3 points
0 comments · 6 min read · EA link
(akovafirat.substack.com)

[Linkpost] 34th Street Magazine’s “Saving Everyone, Everywhere, Across Space and Time”

fiddle · 20 Apr 2026 1:23 UTC
3 points
0 comments · 1 min read · EA link
(www.34st.com)

Will AI make everything more correlated?

TFD · 19 Apr 2026 23:06 UTC
7 points
0 comments · 2 min read · EA link
(www.thefloatingdroid.com)

Why is it so hard to convince people of EA?

David Goodman · 19 Apr 2026 21:17 UTC
9 points
2 comments · 9 min read · EA link

[Question] What, in your view, explains all these problems?

Contrarian^2 = I 🔸 · 19 Apr 2026 20:32 UTC
1 point
0 comments · 1 min read · EA link

Completing the BlueDot AGI Strategy Course: Reflections from a Global South Perspective

Nnaemeka Emmanuel Nnadi · 19 Apr 2026 19:04 UTC
5 points
0 comments · 2 min read · EA link

Harnessing the Expected Value of Other People’s Giving

Vahid Baugher · 19 Apr 2026 18:57 UTC
1 point
0 comments · 6 min read · EA link

Scope and Sensitivity

Contrarian^2 = I 🔸 · 19 Apr 2026 18:56 UTC
1 point
0 comments · 1 min read · EA link

Applying Digital Marketing and Growth Hacking to EA Outreach: Opportunities and Risks

Kimberly Hayes · 19 Apr 2026 18:53 UTC
2 points
1 comment · 1 min read · EA link

AI cannot taste things

Itsi Weinstock · 19 Apr 2026 18:52 UTC
1 point
0 comments · 5 min read · EA link

Thinking about the other 99%

Amr · 19 Apr 2026 17:06 UTC
4 points
0 comments · 2 min read · EA link

Against Credulous AI Hype

James Fodor · 19 Apr 2026 16:09 UTC
23 points
4 comments · 6 min read · EA link

Resources for starting and growing an AI safety org

Bryce Robertson · 19 Apr 2026 5:16 UTC
9 points
0 comments · 1 min read · EA link

P(doom) ranges from 10% to 99% depending on who you ask. I searched 1,259 hours of AI safety podcasts to map the actual disagreements.

Bardoonii · 18 Apr 2026 18:46 UTC
−3 points
2 comments · 1 min read · EA link

Why we need to resist the idea of net negative lives in insects

Seidenpuma · 18 Apr 2026 16:10 UTC
50 points
13 comments · 4 min read · EA link

AI risk is not a Pascal’s wager

ozymandias · 18 Apr 2026 15:49 UTC
11 points
1 comment · 5 min read · EA link
(thingofthings.substack.com)

Effective Altruism and Radical Trivialisms

Bentham's Bulldog · 18 Apr 2026 15:38 UTC
14 points
2 comments · 5 min read · EA link

Building the Field of AI Safety

80000_Hours · 18 Apr 2026 9:52 UTC
25 points
1 comment · 14 min read · EA link