
Criticism of effective altruist causes

Last edit: Jul 22, 2022, 1:44 PM by Leo

The criticism of effective altruist causes tag covers posts that criticize popular claims about the importance of a given cause area, or popular ideas within that area. Posts need not be critical of the area as a whole; challenging one or more popular ideas related to the area is enough.

Related entries

criticism of effective altruism | criticism of effective altruism culture | criticism of effective altruist organizations | criticism of longtermism and existential risk studies

Pre-announcing a contest for critiques and red teaming

Lizka · Mar 25, 2022, 11:52 AM
173 points
27 comments · 2 min read · EA link

Response to Recent Criticisms of Longtermism

ab · Dec 13, 2021, 1:36 PM
249 points
31 comments · 28 min read · EA link

Why AI is Harder Than We Think—Melanie Mitchell

Eevee🔹 · Apr 28, 2021, 8:19 AM
45 points
7 comments · 2 min read · EA link
(arxiv.org)

Help me find the crux between EA/XR and Progress Studies

jasoncrawford · Jun 2, 2021, 6:47 PM
119 points
37 comments · 3 min read · EA link

[Linkpost] Eric Schwitzgebel: Against Longtermism

ag4000 · Jan 6, 2022, 2:15 PM
41 points
4 comments · 1 min read · EA link

How much current animal suffering does longtermism let us ignore?

Jacob Eliosoff · Apr 21, 2022, 9:10 AM
40 points
50 comments · 6 min read · EA link

Progress studies vs. longtermist EA: some differences

Max_Daniel · May 31, 2021, 9:35 PM
84 points
27 comments · 3 min read · EA link

Supervolcanoes tail risk has been exaggerated?

Vasco Grilo🔸 · Mar 6, 2024, 8:38 AM
46 points
9 comments · 8 min read · EA link
(journals.ametsoc.org)

Why I No Longer Prioritize Wild Animal Welfare

saulius · Feb 15, 2023, 12:11 PM
321 points
65 comments · 5 min read · EA link

Ben Garfinkel: How sure are we about this AI stuff?

bgarfinkel · Feb 9, 2019, 7:17 PM
128 points
20 comments · 18 min read · EA link

[edited] Inequality is a (small) problem for EA and economic growth

Karthik Tadepalli · Aug 8, 2022, 9:42 AM
99 points
28 comments · 7 min read · EA link

[Question] How much EA analysis of AI safety as a cause area exists?

richard_ngo · Sep 6, 2019, 11:15 AM
94 points
20 comments · 2 min read · EA link

Will splashy philanthropy cause the biosecurity field to focus on the wrong risks?

Tessa A 🔸 · Apr 30, 2019, 4:03 PM
45 points
10 comments · 2 min read · EA link
(thebulletin.org)

Is RP’s Moral Weights Project too animal friendly? Four critical junctures

NickLaing · Oct 11, 2024, 12:03 PM
132 points
56 comments · 5 min read · EA link

Democratising Risk—or how EA deals with critics

CarlaZoeC · Dec 28, 2021, 3:05 PM
273 points
311 comments · 4 min read · EA link

Concerns with the Wellbeing of Future Generations Bill

Larks · Mar 9, 2022, 6:12 PM
126 points
37 comments · 21 min read · EA link

Re. Longtermism: A response to the EA forum (part 2)

vadmas · Mar 1, 2021, 6:13 PM
15 points
4 comments · 1 min read · EA link

Effective Altruism is an Ideology, not (just) a Question

James Fodor · Jun 28, 2019, 7:18 AM
168 points
46 comments · 16 min read · EA link
(thegodlesstheist.com)

On AI Weapons

kbog · Nov 13, 2019, 12:48 PM
76 points
10 comments · 30 min read · EA link

Critical Review of ‘The Precipice’: A Reassessment of the Risks of AI and Pandemics

James Fodor · May 11, 2020, 11:11 AM
111 points
32 comments · 26 min read · EA link

20 Critiques of AI Safety That I Found on Twitter

Daniel Kirmani · Jun 23, 2022, 3:11 PM
14 points
13 comments · 1 min read · EA link

Biological Anchors external review by Jennifer Lin (linkpost)

peterhartree · Nov 30, 2022, 1:06 PM
36 points
0 comments · 1 min read · EA link
(docs.google.com)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · Jun 14, 2022, 7:11 PM
63 points
14 comments · 4 min read · EA link
(theinsideview.ai)

AI timelines and theoretical understanding of deep learning

Venky1024 · Sep 12, 2021, 4:26 PM
4 points
8 comments · 2 min read · EA link

[linkpost] Peter Singer: The Hinge of History

mic · Jan 16, 2022, 1:25 AM
38 points
8 comments · 3 min read · EA link

Critique of Superintelligence Part 2

James Fodor · Dec 13, 2018, 5:12 AM
10 points
12 comments · 7 min read · EA link

Why the expected numbers of farmed animals in the far future might be huge

Fai · Mar 4, 2022, 7:59 PM
134 points
29 comments · 16 min read · EA link

[Cross-post] Change my mind: we should define and measure the effectiveness of advanced AI

David Johnston · Apr 6, 2022, 12:20 AM
4 points
0 comments · 7 min read · EA link

Critique of OpenPhil’s macroeconomic policy advocacy

Hauke Hillebrandt · Mar 24, 2022, 10:03 PM
143 points
39 comments · 24 min read · EA link

Critique of Superintelligence Part 3

James Fodor · Dec 13, 2018, 5:13 AM
3 points
5 comments · 7 min read · EA link

Why I’m skeptical about unproven causes (and you should be too)

Peter Wildeford · Aug 7, 2013, 4:00 AM
54 points
1 comment · 11 min read · EA link

The Epistemic Challenge to Longtermism (Tarsney, 2020)

MichaelA🔸 · Apr 4, 2021, 3:09 AM
79 points
27 comments · 2 min read · EA link
(globalprioritiesinstitute.org)

Possible misconceptions about (strong) longtermism

JackM · Mar 9, 2021, 5:58 PM
90 points
43 comments · 19 min read · EA link

A Sequence Against Strong Longtermism

vadmas · Jul 22, 2021, 8:07 PM
20 points
14 comments · 1 min read · EA link

Winners of the EA Criticism and Red Teaming Contest

Lizka · Oct 1, 2022, 1:50 AM
226 points
41 comments · 19 min read · EA link

The Fermi Paradox has not been dissolved

James Fodor · Dec 12, 2020, 12:02 PM
107 points
14 comments · 14 min read · EA link

[Question] Why should we *not* put effort into AI safety research?

Ben Thompson · May 16, 2021, 5:11 AM
15 points
5 comments · 1 min read · EA link

People in bunkers, “sardines” and why biorisks may be overrated as a global priority

Evan R. Murphy · Oct 23, 2021, 12:19 AM
22 points
6 comments · 3 min read · EA link

Voting reform seems overrated

Nathan_Barnard · Apr 10, 2021, 12:35 AM
12 points
11 comments · 1 min read · EA link

[Question] What are the leading critiques of “longtermism” and related concepts

AlasdairGives · May 30, 2020, 10:54 AM
47 points
27 comments · 1 min read · EA link

Were the Great Tragedies of History “Mere Ripples”?

philosophytorres · Feb 8, 2021, 2:56 PM
−9 points
11 comments · 1 min read · EA link

Why Research into Wild Animal Suffering Concerns me

Jordan_Warner · Oct 25, 2020, 10:26 PM
22 points
42 comments · 1 min read · EA link

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

NunoSempere · Jun 16, 2022, 4:40 PM
303 points
97 comments · 26 min read · EA link

Against GDP as a metric for timelines and takeoff speeds

kokotajlod · Dec 29, 2020, 5:50 PM
47 points
6 comments · 14 min read · EA link

Updating on Nuclear Power

rileyharris · Apr 24, 2022, 5:35 AM
5 points
17 comments · 1 min read · EA link

[Link] “The AI Timelines Scam”

Milan Griffes · Jul 11, 2019, 3:37 AM
22 points
2 comments · 1 min read · EA link

On how various plans miss the hard bits of the alignment challenge

So8res · Jul 12, 2022, 5:35 AM
126 points
13 comments · 29 min read · EA link

Thoughts on electoral reform

Tobias_Baumann · Feb 18, 2020, 4:23 PM
85 points
31 comments · 4 min read · EA link

RCTs in Development Economics, Their Critics and Their Evolution (Ogden, 2020) [linkpost]

KarolinaSarek🔸 · Apr 6, 2021, 12:29 PM
78 points
5 comments · 12 min read · EA link

Disentangling “Improving Institutional Decision-Making”

Lizka · Sep 13, 2021, 11:50 PM
96 points
16 comments · 19 min read · EA link

The Base Rate of Longtermism Is Bad

ColdButtonIssues · Sep 5, 2022, 1:29 PM
225 points
27 comments · 7 min read · EA link

AGI ruin scenarios are likely (and disjunctive)

So8res · Jul 27, 2022, 3:24 AM
53 points
5 comments · 6 min read · EA link

Critique of Superintelligence Part 5

James Fodor · Dec 13, 2018, 5:19 AM
12 points
2 comments · 6 min read · EA link

Thoughts on “A case against strong longtermism” (Masrani)

MichaelA🔸 · May 3, 2021, 2:22 PM
39 points
33 comments · 2 min read · EA link

Longtermist slogans that need to be retired

Michael_Wiebe · May 9, 2022, 1:07 AM
5 points
21 comments · 1 min read · EA link

A tale of 2.5 orthogonality theses

Arepo · May 1, 2022, 1:53 PM
141 points
31 comments · 11 min read · EA link

Birds, Brains, Planes, and AI: Against Appeals to the Complexity/Mysteriousness/Efficiency of the Brain

kokotajlod · Jan 18, 2021, 12:39 PM
27 points
2 comments · 1 min read · EA link

Julia Galef and Angus Deaton: podcast discussion of RCT issues (excerpts)

Aaron Gertler 🔸 · Jan 4, 2021, 9:35 PM
29 points
5 comments · 9 min read · EA link
(rationallyspeakingpodcast.org)

Nobody’s on the ball on AGI alignment

leopold · Mar 29, 2023, 2:26 PM
327 points
65 comments · 9 min read · EA link
(www.forourposterity.com)

More global warming might be good to mitigate the food shocks caused by abrupt sunlight reduction scenarios

Vasco Grilo🔸 · Apr 29, 2023, 8:24 AM
46 points
39 comments · 13 min read · EA link

[Question] Are you optimistic about the commercialization of alt proteins in 2023 and beyond?

Eevee🔹 · Apr 29, 2023, 10:20 PM
24 points
7 comments · 1 min read · EA link

We don’t need AGI for an amazing future

Karl von Wendt · May 4, 2023, 12:11 PM
57 points
2 comments · 1 min read · EA link

Can a terrorist attack cause human extinction? Not on priors

Vasco Grilo🔸 · Dec 2, 2023, 8:20 AM
43 points
9 comments · 15 min read · EA link

Epistemics (Part 2: Examples) | Reflective Altruism

Eevee🔹 · May 19, 2023, 9:28 PM
34 points
0 comments · 2 min read · EA link
(ineffectivealtruismblog.com)

Change my mind: Veganism entails trade-offs, and health is one of the axes

Elizabeth · Jun 3, 2023, 12:12 AM
128 points
84 comments · 1 min read · EA link

Why Yudkowsky is wrong about “covalently bonded equivalents of biology”

titotal · Dec 6, 2023, 2:09 PM
29 points
20 comments · 16 min read · EA link
(open.substack.com)

Saving lives in normal times is better to improve the longterm future than doing so in catastrophes?

Vasco Grilo🔸 · Apr 20, 2024, 8:37 AM
13 points
25 comments · 9 min read · EA link

Can a conflict cause human extinction? Yet again, not on priors

Vasco Grilo🔸 · Jun 19, 2024, 4:59 PM
19 points
2 comments · 11 min read · EA link

Helping animals or saving human lives in high income countries is arguably better than saving human lives in low income countries?

Vasco Grilo🔸 · Mar 21, 2024, 9:05 AM
12 points
10 comments · 12 min read · EA link

Utilitarianism and the replaceability of desires and attachments

MichaelStJules · Jul 27, 2024, 1:57 AM
34 points
13 comments · 12 min read · EA link

Nuclear winter scepticism

Vasco Grilo🔸 · Aug 13, 2023, 10:55 AM
110 points
42 comments · 10 min read · EA link
(www.navalgazing.net)

The Meat Eater Problem

Vasco Grilo🔸 · Jun 17, 2023, 6:52 AM
61 points
1 comment · 7 min read · EA link
(journalofcontroversialideas.org)

[Question] Are we confident that superintelligent artificial intelligence disempowering humans would be bad?

Vasco Grilo🔸 · Jun 10, 2023, 9:24 AM
24 points
27 comments · 1 min read · EA link

[Question] Do you think decreasing the consumption of animals is good/bad? Think again?

Vasco Grilo🔸 · May 27, 2023, 8:22 AM
89 points
41 comments · 5 min read · EA link

Prioritising animal welfare over global health and development?

Vasco Grilo🔸 · May 13, 2023, 9:03 AM
112 points
50 comments · 18 min read · EA link

Finding bugs in GiveWell’s top charities

Vasco Grilo🔸 · Jan 23, 2023, 4:49 PM
47 points
14 comments · 6 min read · EA link

Exaggerating the risks (Part 13: Ord on Biorisk)

Vasco Grilo🔸 · Dec 31, 2023, 8:45 AM
57 points
18 comments · 13 min read · EA link
(ineffectivealtruismblog.com)

International risk of food insecurity and mass mortality in a runaway global warming scenario

Vasco Grilo🔸 · Sep 2, 2023, 7:28 AM
15 points
2 comments · 6 min read · EA link
(www.sciencedirect.com)

Sonia Ben Ouagrham-Gormley on Barriers to Bioweapons

Vasco Grilo🔸 · Feb 15, 2024, 5:58 PM
21 points
0 comments · 1 min read · EA link
(hearthisidea.com)

Famine deaths due to the climatic effects of nuclear war

Vasco Grilo🔸 · Oct 14, 2023, 12:05 PM
40 points
21 comments · 66 min read · EA link

Critique of “Comprehensive evidence implies a higher social cost of CO2”

Vasco Grilo🔸 · Aug 19, 2023, 8:49 AM
30 points
0 comments · 7 min read · EA link
(daviddfriedman.substack.com)

Variance of the annual conflict and epidemic/pandemic deaths as a fraction of the global population

Vasco Grilo🔸 · Sep 10, 2024, 5:02 PM
16 points
0 comments · 2 min read · EA link

Attention on AI X-Risk Likely Hasn’t Distracted from Current Harms from AI

Erich_Grunewald 🔸 · Dec 21, 2023, 5:24 PM
190 points
13 comments · 1 min read · EA link
(www.erichgrunewald.com)

Concepts of existential catastrophe

Vasco Grilo🔸 · Apr 15, 2024, 5:16 PM
11 points
1 comment · 8 min read · EA link
(globalprioritiesinstitute.org)

Can a war cause human extinction? Once again, not on priors

Vasco Grilo🔸 · Jan 25, 2024, 7:56 AM
67 points
29 comments · 18 min read · EA link

Farmed animals may have positive lives now or in a few decades?

Vasco Grilo🔸 · Oct 26, 2024, 9:18 AM
24 points
10 comments · 7 min read · EA link

GiveWell may have made 1 billion dollars of harmful grants, and Ambitious Impact incubated 8 harmful organisations via increasing factory-farming?

Vasco Grilo🔸 · Dec 22, 2024, 10:19 AM
98 points
108 comments · 9 min read · EA link

EAA is relatively overinvesting in corporate welfare reforms

kato · Jan 6, 2022, 2:47 AM
69 points
30 comments · 5 min read · EA link

Cost-effectiveness of the fish welfare interventions recommended by Ambitious Impact, and Fish Welfare Initiative’s farm program

Vasco Grilo🔸 · Jan 24, 2025, 5:35 PM
41 points
7 comments · 13 min read · EA link

Insecticide-treated nets significantly harm mosquitoes, but one can easily offset this?

Vasco Grilo🔸 · Feb 3, 2025, 6:03 PM
28 points
41 comments · 7 min read · EA link

Response to recent criticisms of EA “longtermist” thinking

kbog · Jan 6, 2020, 4:31 AM
27 points
46 comments · 11 min read · EA link

Against immortality?

Owen Cotton-Barratt · Apr 28, 2022, 11:51 AM
110 points
40 comments · 3 min read · EA link

There’s Lots More To Do

Jeff Kaufman 🔸 · May 29, 2019, 7:58 PM
134 points
30 comments · 2 min read · EA link

Disagreeing about what’s effective isn’t disagreeing with effective altruism

Robert_Wiblin · Jul 16, 2015, 7:00 AM
18 points
1 comment · 3 min read · EA link
(80000hours.org)

Critique of Superintelligence Part 1

James Fodor · Dec 13, 2018, 5:10 AM
22 points
13 comments · 8 min read · EA link

Critique of Superintelligence Part 4

James Fodor · Dec 13, 2018, 5:14 AM
4 points
2 comments · 4 min read · EA link

Why EAs are skeptical about AI Safety

Lukas Trötzmüller🔸 · Jul 18, 2022, 7:01 PM
290 points
31 comments · 29 min read · EA link

Why I’m skeptical of moral circle expansion as a cause area

Question Mark · Jul 14, 2022, 8:29 PM
18 points
5 comments · 3 min read · EA link

Cash transfers are not necessarily wealth transfers

BenHoffman · Dec 1, 2017, 11:35 PM
17 points
12 comments · 12 min read · EA link

Reflection—Growth and the case against randomista development—EA Forum

Joanna Picetti · Jul 28, 2022, 11:38 PM
−16 points
3 comments · 4 min read · EA link
(forum.effectivealtruism.org)

Defective Altruism article in Current Affairs Magazine

ukc10014 · Sep 22, 2022, 1:27 PM
13 points
7 comments · 2 min read · EA link

Proving too much: A response to the EA forum

vadmas · Feb 15, 2021, 7:00 PM
24 points
7 comments · 1 min read · EA link

My argument against AGI

cveres · Oct 12, 2022, 6:32 AM
2 points
29 comments · 3 min read · EA link

In defense of the certifiers

LewisBollard · Jan 24, 2025, 3:12 PM
163 points
10 comments · 7 min read · EA link

Searle vs Bostrom: crucial considerations for EA AI work?

Forumite · Jul 13, 2022, 10:18 AM
11 points
2 comments · 1 min read · EA link

Against longtermism

Brian Lui · Aug 11, 2022, 5:37 AM
38 points
30 comments · 6 min read · EA link

How to think about an uncertain future: lessons from other sectors & mistakes of longtermist EAs

weeatquince · Sep 5, 2020, 12:51 PM
63 points
31 comments · 14 min read · EA link

The reasonableness of special concerns

jwt · Aug 29, 2022, 12:10 AM
3 points
0 comments · 3 min read · EA link

Thoughts on Émile P. Torres’ new article, ‘Understanding “longtermism”: Why this suddenly influential philosophy is so toxic’?

Atsina · Aug 22, 2022, 9:13 AM
8 points
2 comments · 1 min read · EA link

A case against strong longtermism

vadmas · Dec 15, 2020, 8:56 PM
68 points
79 comments · 1 min read · EA link

Are AI safetyists crying wolf?

sarahhw · Jan 8, 2025, 8:54 PM
61 points
21 comments · 16 min read · EA link
(longerramblings.substack.com)

Comments on Ernest Davis’s comments on Bostrom’s Superintelligence

Giles · Jan 24, 2015, 4:40 AM
2 points
8 comments · 9 min read · EA link

How the animal movement could do even more good

Tobias_Baumann · Feb 28, 2022, 11:10 PM
80 points
1 comment · 7 min read · EA link
(centerforreducingsuffering.org)

Effective altruism is worse than traditional philanthropy in the way it excludes the extreme poor in the global south.

Jaime Sevilla · Dec 17, 2022, 2:44 PM
57 points
29 comments · 2 min read · EA link
(dear-humanity.org)

Speculative scenarios for climate-caused existential catastrophes

vincentzh · Jan 27, 2023, 5:01 PM
26 points
2 comments · 4 min read · EA link

Three Biases That Made Me Believe in AI Risk

beth · Feb 13, 2019, 11:22 PM
41 points
20 comments · 3 min read · EA link

Editable “Important, Tractable, Neglected” critiques review

DirectedEvolution · Feb 11, 2023, 4:01 AM
10 points
2 comments · 1 min read · EA link
(forum.effectivealtruism.org)

There can be highly neglected solutions to less-neglected problems

Linda Linsefors · Feb 10, 2023, 8:08 PM
211 points
35 comments · 3 min read · EA link

Response to Torres’ ‘The Case Against Longtermism’

HaydnBelfield · Mar 8, 2021, 6:09 PM
138 points
73 comments · 5 min read · EA link

AI Risk is like Terminator; Stop Saying it’s Not

skluug · Mar 8, 2022, 7:17 PM
191 points
43 comments · 10 min read · EA link
(skluug.substack.com)

An epistemic critique of longtermism

Nathan_Barnard · Jul 10, 2022, 10:59 AM
12 points
4 comments · 9 min read · EA link

Why this isn’t the “most important century”

Arjun Khemani · Jul 27, 2022, 3:27 PM
2 points
2 comments · 5 min read · EA link
(arjunkhemani.com)

The Dissolution of AI Safety

Roko · Dec 12, 2024, 10:46 AM
−7 points
0 comments · 1 min read · EA link
(www.transhumanaxiology.com)

A Response to OpenPhil’s R&D Model

Anthony Repetto · Aug 3, 2022, 11:31 PM
3 points
2 comments · 3 min read · EA link

Marginal Aid and Effective Altruism

TomDrake · Mar 30, 2023, 8:31 PM
39 points
14 comments · 2 min read · EA link

A response to Matthews on AI Risk

RyanCarey · Aug 11, 2015, 12:58 PM
11 points
16 comments · 6 min read · EA link

Concerns with Longtermism

Wahhab Baldwin · Sep 16, 2022, 5:37 AM
13 points
9 comments · 3 min read · EA link

Against prediction markets

Denise_Melchin · May 12, 2018, 12:08 PM
25 points
20 comments · 4 min read · EA link

Linkpost—Beyond Hyperanthropomorphism: Or, why fears of AI are not even wrong, and how to make them real

Locke · Aug 24, 2022, 4:24 PM
−4 points
3 comments · 2 min read · EA link
(studio.ribbonfarm.com)

Raphaël Millière on the Limits of Deep Learning and AI x-risk skepticism

Michaël Trazzi · Jun 24, 2022, 6:33 PM
20 points
0 comments · 4 min read · EA link
(theinsideview.ai)

Hobbit Manifesto

Clay Cube · Aug 30, 2022, 8:24 PM
8 points
9 comments · 7 min read · EA link

Is the Far Future Irrelevant for Moral Decision-Making?

Tristan D · Oct 1, 2024, 7:42 AM
35 points
31 comments · 2 min read · EA link
(www.sciencedirect.com)

For Longtermism, Employ an Earth-Based Morality

Wahhab Baldwin · Sep 26, 2022, 6:57 PM
−3 points
0 comments · 2 min read · EA link

On Mike Berkowitz’s 80k Podcast

Timothy_Liptrot · Apr 21, 2021, 1:53 AM
16 points
10 comments · 4 min read · EA link

Summary: Against the Singularity Hypothesis (David Thorstad)

Noah Varley🔸 · Mar 27, 2024, 1:48 PM
63 points
10 comments · 5 min read · EA link

(Crosspost) Is Effective Altruism just a giant meme?

cryptopsyfan69420 · Oct 5, 2024, 11:33 PM
−18 points
5 comments · 4 min read · EA link

The Failed Strategy of Artificial Intelligence Doomers

yhoiseth · Feb 5, 2025, 7:34 PM
12 points
2 comments · 1 min read · EA link
(letter.palladiummag.com)

Are we already past the precipice?

Dem0sthenes · Aug 10, 2022, 4:01 AM
1 point
5 comments · 2 min read · EA link

Strong Longtermism, Irrefutability, and Moral Progress

ben_chugg · Dec 26, 2020, 7:44 PM
60 points
99 comments · 11 min read · EA link

Review of Past Grants: The $100,000 Grant for a Video Game?

Nicolae · Jun 3, 2024, 2:28 PM
202 points
64 comments · 2 min read · EA link

Longtermism and Uncertainty

Wahhab Baldwin · Sep 26, 2022, 4:21 PM
7 points
2 comments · 3 min read · EA link