Follow up: time sensitive animal welfare action

Bentham's Bulldog · 28 Apr 2026 23:53 UTC
40 points
0 comments · 1 min read · EA link

AI safety funding mechanisms aren’t built for scale

Akshyae Singh · 28 Apr 2026 23:38 UTC
3 points
0 comments · 7 min read · EA link

The Missing Key to AGI Alignment

lucarade · 28 Apr 2026 21:41 UTC
−1 points
0 comments · 10 min read · EA link

Is AI welfare work puntable?

OscarD🔸 · 28 Apr 2026 21:07 UTC
24 points
2 comments · 7 min read · EA link

The story behind the bad AI stat that moved markets and misled millions

80000_Hours · 28 Apr 2026 20:25 UTC
6 points
0 comments · 8 min read · EA link
(80000hours.org)

Comment on “Forecasting is Way Overrated, and We Should Stop Funding It”

Jhrosenberg · 28 Apr 2026 20:15 UTC
52 points
4 comments · 9 min read · EA link

Strategy matters when someone implements it. Astra is cultivating people to do both.

a_e_r · 28 Apr 2026 19:58 UTC
22 points
1 comment · 4 min read · EA link

Ad creative review using asset tagging + results

Dalton Sweet · 28 Apr 2026 17:08 UTC
4 points
2 comments · 1 min read · EA link

Egalitarianism Is False

Vasco Grilo🔸 · 28 Apr 2026 16:27 UTC
12 points
3 comments · 8 min read · EA link
(benthams.substack.com)

Applications Open Until May 13: New Roots Institute Fellowship

Becca Rogers · 28 Apr 2026 16:20 UTC
16 points
0 comments · 1 min read · EA link

Why our X (Twitter) outreach is quietly tanking this year

Kimberly Hayes · 28 Apr 2026 15:41 UTC
4 points
0 comments · 2 min read · EA link

Analysing the extreme spread in existential risk survey estimates

titotal · 28 Apr 2026 14:18 UTC
43 points
2 comments · 17 min read · EA link
(open.substack.com)

Fear and Collective Efficacy Under Abstract Risk: Technical Fluency, Emotional Capacity, and the Future of AI Governance

Taylor Grogan · 28 Apr 2026 13:38 UTC
5 points
1 comment · 10 min read · EA link

Time Sensitive Urgent Animal Welfare Action

Bentham's Bulldog · 28 Apr 2026 13:32 UTC
179 points
1 comment · 1 min read · EA link

Starfish

Aaron Gertler 🔸 · 28 Apr 2026 7:15 UTC
146 points
7 comments · 3 min read · EA link
(archiveofourown.org)

Contra Binder on far-UVC and filtration

Jeff Kaufman 🔸 · 28 Apr 2026 3:20 UTC
87 points
8 comments · 3 min read · EA link
(www.jefftk.com)

The Cascade Effect: Why Supporting Women Through the Full Climacteric in Latin America May Impact 5–6 People Per Participant

Karina E. Benitez Lin · 28 Apr 2026 1:36 UTC
5 points
0 comments · 7 min read · EA link

PLF for Dummies

Amr · 27 Apr 2026 17:36 UTC
9 points
1 comment · 2 min read · EA link

Strong el Niño effect impacts later this year?

NunoSempere · 27 Apr 2026 17:15 UTC
29 points
0 comments · 1 min read · EA link
(blog.sentinel-team.org)

“If this had been published a decade earlier…” Help us change the canon!

Bella · 27 Apr 2026 16:25 UTC
64 points
0 comments · 3 min read · EA link

AWL 2025 Year-In-Review and 2026 Plans

Animal Welfare League · 27 Apr 2026 16:03 UTC
10 points
2 comments · 18 min read · EA link

We need more diversity in epistemics, worldviews & ways of knowing to tackle AI risk effectively

Akhil Puri · 27 Apr 2026 14:10 UTC
11 points
0 comments · 10 min read · EA link
(akhilpuri.substack.com)

Language models know what matters and the foundations of ethics better than you

Michele Campolo · 27 Apr 2026 14:02 UTC
5 points
0 comments · 90 min read · EA link

From nothing to important actions: agents that act morally

Michele Campolo · 27 Apr 2026 14:01 UTC
3 points
0 comments · 22 min read · EA link

Is neglectedness a reliable criterion for marginal impact?

rowboat snowman · 27 Apr 2026 13:57 UTC
3 points
0 comments · 8 min read · EA link

Why AI Safety’s Measurement Frameworks Are Blind to the Risks Already Happening

Tomoko Mitsuoka · 27 Apr 2026 13:49 UTC
−3 points
2 comments · 6 min read · EA link

Measuring epistemic phase transitions in LLMs: the Epistemic Curie Temperature (open data, 7 models)

SardorR · 27 Apr 2026 13:47 UTC
0 points
1 comment · 1 min read · EA link

Alignment Through Robust Moral Development

Vince Liotta · 27 Apr 2026 13:46 UTC
3 points
0 comments · 16 min read · EA link

Shrimp Sentience Research: A Prioritization Guide

Guillaume Reho · 27 Apr 2026 13:21 UTC
59 points
6 comments · 62 min read · EA link
(docs.google.com)

The Data Is In: “Effective” Philanthropy Has an Evidence Issue

GraceAdams🔸 · 26 Apr 2026 23:47 UTC
48 points
0 comments · 1 min read · EA link
(www.insidephilanthropy.com)

“Bad faith” means intentionally misrepresenting your beliefs

TFD · 26 Apr 2026 19:08 UTC
3 points
0 comments · 6 min read · EA link
(www.thefloatingdroid.com)

LessWrong Community Weekend 2026

jt · 26 Apr 2026 19:01 UTC
2 points
0 comments · 5 min read · EA link

Why should Effective Altruism poverty resolution projects use ethnographic data in order to help the most number of people more effectively?

AhmadAnthropology · 26 Apr 2026 17:31 UTC
1 point
0 comments · 3 min read · EA link

Mapping Locus of Control

Aayush Sharma · 26 Apr 2026 17:31 UTC
1 point
0 comments · 11 min read · EA link

[Question] What mechanisms could lead a fully autonomous AI system to act against human welfare?

Rushabh · 26 Apr 2026 17:30 UTC
1 point
0 comments · 1 min read · EA link

Governance has an inner alignment problem. Here’s an institutional spec to address it.

eliask · 26 Apr 2026 17:28 UTC
1 point
0 comments · 7 min read · EA link

What Research Commons Can Learn from UBI

dlnrbts283 · 26 Apr 2026 15:18 UTC
3 points
3 comments · 3 min read · EA link

[Question] Resolving Paradox: Funding Isn’t Bottleneck vs. ~80% high Rejection Rates in AI Safety

jackchang110 · 26 Apr 2026 7:52 UTC
21 points
8 comments · 1 min read · EA link

Does AI safety need a stronger concept of answerability?

vladisav jovanovic · 26 Apr 2026 7:11 UTC
1 point
0 comments · 1 min read · EA link

Forecasting is Way Overrated, and We Should Stop Funding It

Marcus Abramovitch 🔸 · 25 Apr 2026 22:36 UTC
335 points
85 comments · 5 min read · EA link

Stop Donating to AI Safety Research*

Sophie Kim · 25 Apr 2026 21:25 UTC
66 points
13 comments · 9 min read · EA link
(thecounterfactual.substack.com)

AI safety can be a Pascal’s mugging even if p(doom) is high

Elliott Thornley (EJT) · 25 Apr 2026 16:20 UTC
36 points
10 comments · 1 min read · EA link

EA Berlin University Group: A Post-Mortem and Reflections

Emily K🔹 · 25 Apr 2026 9:45 UTC
17 points
0 comments · 12 min read · EA link

AI and animal welfare: am I missing something?

SiobhanBall · 25 Apr 2026 9:43 UTC
40 points
15 comments · 3 min read · EA link

Manifund’s Falcon Fund

Marcus Abramovitch 🔸 · 24 Apr 2026 22:09 UTC
82 points
14 comments · 4 min read · EA link
(manifund.org)

Why Forecasting Fails Decision Makers

JamesN · 24 Apr 2026 19:39 UTC
65 points
9 comments · 4 min read · EA link

The Saturation View

Forethought · 24 Apr 2026 17:28 UTC
24 points
5 comments · 3 min read · EA link
(www.forethought.org)

Building the EA/AI Safety Scene in San Francisco

Chris Leong · 24 Apr 2026 17:14 UTC
15 points
3 comments · 8 min read · EA link

The gap between Systems Safety Engineering and AI Safety is something we need to talk about

Phill Mulvana · 24 Apr 2026 16:02 UTC
8 points
0 comments · 5 min read · EA link

Welfare Biology and AI: The Quiz

Dawn Drescher · 24 Apr 2026 11:57 UTC
19 points
3 comments · 8 min read · EA link
(impartial-priorities.org)