Events on the EA Forum

This page documents notable past events hosted on or related to the Effective Altruism Forum. Each entry links to a public Forum event page with a short description. Events are grouped by type and listed roughly alphabetically within each group, not chronologically.

(If you’d like to help sort these by date, feel free to suggest edits!)

EA Forum Events

  1. AMAs

    1. Saloni Dattani, Science Writer and co-founder of Works in Progress

    2. Tom Ough, Author of ‘The Anti-Catastrophe League’, Senior Editor at UnHerd

    3. Pablo Melchor, author of Altruismo Racional and President of Ayuda Efectiva

    4. Ask Career Advisors Anything

  2. Debates

    1. AI Pause Debate (2023)

    2. AI Welfare Debate Week (2024)

    3. Animal Welfare vs Global Health Debate Week

    4. DIY Debate Week

    5. Existential Choices Debate Week

  3. Annual events

    1. April Fools’ Day

    2. Benjamin Lay Day

    3. Petrov Day

    4. Draft Amnesty

      1. Draft Amnesty Day (2022)

      2. Draft Amnesty Week (2024)

      3. Draft Amnesty Week (2025)

    5. Giving Season

      1. Giving Season (2023)

        1. Donation Election (2023)

      2. Giving Season (2024)

        1. Funding Strategy Week

        2. Marginal Funding Week

        3. Donation Election

        4. Pledge Highlight Week

  4. Other events

    1. Career Conversations Week (2023)

    2. Career Conversations Week (2025)

    3. Creative Writing Contest

    4. Criticism and Red Teaming Contest

    5. EA Forum Review

    6. EA Strategy Fortnight

    7. Forum Prize

Forum events not run by the EA Forum

  1. Africa EA Forum Competition

  2. Ask Me Anything

  3. Community Builder Writing Contest

  4. GiveWell Change Our Mind Contest

  5. Open Philanthropy AI Worldviews Contest
