
AI risk skepticism

Last edit: Feb 12, 2024, 8:57 AM by Vasco Grilo🔸

AI risk skepticism is skepticism about arguments that advanced artificial intelligence poses a catastrophic or existential risk.

Further reading

Bergal, Asya & Robert Long (2019) Conversation with Robin Hanson, AI Impacts, November 13.

Garfinkel, Ben (2019) How sure are we about this AI stuff?, Effective Altruism Forum, February 9.

Vinding, Magnus (2017/2022) A Contra AI FOOM Reading List.

Thorstad, David. Exaggerating the risks, Reflective Altruism (blog series).

Related entries

AI alignment | AI safety | criticism of effective altruism

Counterarguments to the basic AI risk case

Katja_Grace · Oct 14, 2022, 8:30 PM
284 points
23 comments · 34 min read · EA link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · Mar 21, 2023, 1:23 AM
166 points
21 comments · 39 min read · EA link

My highly personal skepticism braindump on existential risk from artificial intelligence.

NunoSempere · Jan 23, 2023, 8:08 PM
436 points
116 comments · 14 min read · EA link
(nunosempere.com)

Exponential AI takeoff is a myth

Christoph Hartmann 🔸 · May 31, 2023, 11:47 AM
47 points
11 comments · 9 min read · EA link

Evolution provides no evidence for the sharp left turn

Quintin Pope · Apr 11, 2023, 6:48 PM
43 points
2 comments · 1 min read · EA link

[linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23

Arjun Panickssery · Apr 14, 2023, 11:26 PM
41 points
3 comments · 4 min read · EA link
(quillette.com)

[Question] What is the current most representative EA AI x-risk argument?

Matthew_Barnett · Dec 15, 2023, 10:04 PM
117 points
50 comments · 3 min read · EA link

titotal on AI risk scepticism

Vasco Grilo🔸 · May 30, 2024, 5:03 PM
76 points
3 comments · 6 min read · EA link
(forum.effectivealtruism.org)

Counting arguments provide no evidence for AI doom

Nora Belrose · Feb 27, 2024, 11:03 PM
84 points
15 comments · 1 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · Nov 24, 2022, 11:54 AM
74 points
32 comments · 22 min read · EA link

Why EAs are skeptical about AI Safety

Lukas Trötzmüller🔸 · Jul 18, 2022, 7:01 PM
290 points
31 comments · 29 min read · EA link

The bullseye framework: My case against AI doom

titotal · May 30, 2023, 11:52 AM
71 points
15 comments · 17 min read · EA link

Ben Garfinkel: How sure are we about this AI stuff?

bgarfinkel · Feb 9, 2019, 7:17 PM
128 points
20 comments · 18 min read · EA link

Reasons I’ve been hesitant about high levels of near-ish AI risk

elifland · Jul 22, 2022, 1:32 AM
208 points
16 comments · 7 min read · EA link
(www.foxy-scout.com)

Why AI is Harder Than We Think—Melanie Mitchell

Eevee🔹 · Apr 28, 2021, 8:19 AM
45 points
7 comments · 2 min read · EA link
(arxiv.org)

“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation

titotal · Sep 29, 2023, 2:01 PM
102 points
33 comments · 20 min read · EA link
(titotal.substack.com)

Deceptive Alignment is <1% Likely by Default

DavidW · Feb 21, 2023, 3:07 PM
54 points
26 comments · 14 min read · EA link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · May 26, 2023, 3:57 PM
59 points
0 comments · 18 min read · EA link

[Question] Are AI risks tractable?

defun 🔸 · May 21, 2024, 1:45 PM
23 points
1 comment · 1 min read · EA link

Three Biases That Made Me Believe in AI Risk

beth · Feb 13, 2019, 11:22 PM
41 points
20 comments · 3 min read · EA link

Chaining the evil genie: why “outer” AI safety is probably easy

titotal · Aug 30, 2022, 1:55 PM
40 points
12 comments · 10 min read · EA link

New article from Oren Etzioni

Aryeh Englander · Feb 25, 2020, 3:38 PM
23 points
3 comments · 2 min read · EA link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · Mar 11, 2024, 6:03 AM
25 points
0 comments · 1 min read · EA link
(www.youtube.com)

How I failed to form views on AI safety

Ada-Maaria Hyvärinen · Apr 17, 2022, 11:05 AM
213 points
72 comments · 40 min read · EA link

Benchmark Performance is a Poor Measure of Generalisable AI Reasoning Capabilities

James Fodor · Feb 21, 2025, 4:25 AM
10 points
3 comments · 24 min read · EA link

13 Very Different Stances on AGI

Ozzie Gooen · Dec 27, 2021, 11:30 PM
84 points
23 comments · 3 min read · EA link

A tale of 2.5 orthogonality theses

Arepo · May 1, 2022, 1:53 PM
141 points
31 comments · 11 min read · EA link

Yann LeCun on AGI and AI Safety

Chris Leong · Aug 8, 2023, 11:43 PM
23 points
4 comments · 1 min read · EA link
(drive.google.com)

Red-teaming existential risk from AI

Zed Tarar · Nov 30, 2023, 2:35 PM
30 points
16 comments · 6 min read · EA link

On the Dwarkesh/Chollet Podcast, and the cruxes of scaling to AGI

JWS 🔸 · Jun 15, 2024, 8:24 PM
72 points
49 comments · 17 min read · EA link

Future Matters #7: AI timelines, AI skepticism, and lock-in

Pablo · Feb 3, 2023, 11:47 AM
54 points
0 comments · 17 min read · EA link

My cover story in Jacobin on AI capitalism and the x-risk debates

Garrison · Feb 12, 2024, 11:34 PM
154 points
10 comments · 6 min read · EA link
(jacobin.com)

AI is centralizing by default; let’s not make it worse

Quintin Pope · Sep 21, 2023, 1:35 PM
53 points
16 comments · 15 min read · EA link

Reasons for my negative feelings towards the AI risk discussion

fergusq · Sep 1, 2022, 7:33 AM
43 points
9 comments · 4 min read · EA link

Can a terrorist attack cause human extinction? Not on priors

Vasco Grilo🔸 · Dec 2, 2023, 8:20 AM
43 points
9 comments · 15 min read · EA link

In favour of exploring nagging doubts about x-risk

Owen Cotton-Barratt · Jun 25, 2024, 11:52 PM
89 points
15 comments · 2 min read · EA link

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · Jun 25, 2023, 4:59 PM
80 points
24 comments · 1 min read · EA link

AI scaling myths

Noah Varley🔸 · Jun 27, 2024, 8:29 PM
30 points
0 comments · 1 min read · EA link
(open.substack.com)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · Jun 14, 2022, 7:11 PM
63 points
14 comments · 4 min read · EA link
(theinsideview.ai)

Imitation Learning is Probably Existentially Safe

Vasco Grilo🔸 · Apr 30, 2024, 5:06 PM
19 points
7 comments · 3 min read · EA link
(www.openphilanthropy.org)

Motivation gaps: Why so much EA criticism is hostile and lazy

titotal · Apr 22, 2024, 11:49 AM
213 points
44 comments · 19 min read · EA link
(titotal.substack.com)

“X distracts from Y” as a thinly-disguised fight over group status / politics

Steven Byrnes · Sep 25, 2023, 3:29 PM
89 points
9 comments · 8 min read · EA link

Destroy the “neoliberal hallucination” & fight for animal rights through open rescue.

Chloe Leffakis · Aug 15, 2023, 4:47 AM
−17 points
2 comments · 1 min read · EA link
(www.reddit.com)

AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death

titotal · Sep 22, 2022, 3:00 PM
49 points
11 comments · 15 min read · EA link

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · Feb 21, 2021, 1:41 PM
16 points
1 comment · 7 min read · EA link

How “AGI” could end up being many different specialized AI’s stitched together

titotal · May 8, 2023, 12:32 PM
31 points
2 comments · 9 min read · EA link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal · May 17, 2023, 11:58 AM
43 points
3 comments · 15 min read · EA link

The Leeroy Jenkins principle: How faulty AI could guarantee “warning shots”

titotal · Jan 14, 2024, 3:03 PM
54 points
2 comments · 21 min read · EA link
(titotal.substack.com)

I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027

Vasco Grilo🔸 · Jun 4, 2024, 4:37 PM
195 points
57 comments · 2 min read · EA link

[Question] Why should we *not* put effort into AI safety research?

Ben Thompson · May 16, 2021, 5:11 AM
15 points
5 comments · 1 min read · EA link

Experts’ AI timelines are longer than you have been told?

Vasco Grilo🔸 · Jan 9, 2025, 5:30 PM
30 points
11 comments · 3 min read · EA link
(bayes.net)

The AI Messiah

ryancbriggs · May 5, 2022, 4:58 PM
71 points
44 comments · 2 min read · EA link

Cutting AI Safety down to size

Holly Elmore ⏸️ 🔸 · Nov 9, 2024, 11:40 PM
86 points
5 comments · 5 min read · EA link

Tetherware #2: What every human should know about our most likely AI future

Jáchym Fibír · Feb 28, 2025, 11:25 AM
3 points
0 comments · 11 min read · EA link
(tetherware.substack.com)

Proposing the Conditional AI Safety Treaty (linkpost TIME)

Otto · Nov 15, 2024, 1:56 PM
12 points
6 comments · 3 min read · EA link
(time.com)

From Crisis to Control: Establishing a Resilient Incident Response Framework for Deployed AI Models

KevinN · Jan 31, 2025, 1:06 PM
10 points
1 comment · 6 min read · EA link
(www.techpolicy.press)

Davidson’s Model of Takeoff Speeds: A Critical Take

Violet Hour · Jan 31, 2025, 6:46 PM
38 points
2 comments · 19 min read · EA link

[Question] Benefits/Risks of Scott Aaronson’s Orthodox/Reform Framing for AI Alignment

Jeremy · Nov 21, 2022, 5:47 PM
15 points
5 comments · 1 min read · EA link
(scottaaronson.blog)

Cognitive Biases Contributing to AI X-risk — a deleted excerpt from my 2018 ARCHES draft

Andrew Critch · Dec 3, 2024, 9:29 AM
14 points
1 comment · 1 min read · EA link

Mere exposure effect: Bias in Evaluating AGI X-Risks

Remmelt · Dec 27, 2022, 2:05 PM
4 points
1 comment · 1 min read · EA link

Loss of control of AI is not a likely source of AI x-risk

squek · Nov 9, 2022, 5:48 AM
8 points
0 comments · 1 min read · EA link

Critique of Superintelligence Part 2

James Fodor · Dec 13, 2018, 5:12 AM
10 points
12 comments · 7 min read · EA link

A Critique of AI Takeover Scenarios

James Fodor · Aug 31, 2022, 1:49 PM
53 points
4 comments · 12 min read · EA link

Podcast: Magnus Vinding on reducing suffering, why AI progress is likely to be gradual and distributed and how to reason about politics

Gus Docker · Nov 21, 2021, 3:29 PM
26 points
0 comments · 1 min read · EA link
(www.utilitarianpodcast.com)

No, CS majors didn’t delude themselves that the best way to save the world is to do CS research

Robert_Wiblin · Dec 15, 2015, 5:13 PM
20 points
7 comments · 3 min read · EA link

AGI Isn’t Close—Future Fund Worldview Prize

Toni MUENDEL · Dec 18, 2022, 4:03 PM
−8 points
24 comments · 13 min read · EA link

[Question] What Do AI Safety Pitches Not Get About Your Field?

a_e_r · Sep 20, 2022, 6:13 PM
70 points
18 comments · 1 min read · EA link

Why some people believe in AGI, but I don’t.

cveres · Oct 26, 2022, 3:09 AM
13 points
2 comments · 4 min read · EA link

The missing link to AGI

Yuri Barzov · Sep 28, 2022, 4:37 PM
1 point
7 comments · 1 min read · EA link

The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk

Ember · Aug 16, 2022, 7:48 PM
25 points
35 comments · 17 min read · EA link

Critique of Superintelligence Part 1

James Fodor · Dec 13, 2018, 5:10 AM
22 points
13 comments · 8 min read · EA link

Critique of Superintelligence Part 3

James Fodor · Dec 13, 2018, 5:13 AM
3 points
5 comments · 7 min read · EA link

Critique of Superintelligence Part 5

James Fodor · Dec 13, 2018, 5:19 AM
12 points
2 comments · 6 min read · EA link

Maybe AI risk shouldn’t affect your life plan all that much

Justis · Jul 22, 2022, 3:30 PM
22 points
4 comments · 6 min read · EA link

Critique of Superintelligence Part 4

James Fodor · Dec 13, 2018, 5:14 AM
4 points
2 comments · 4 min read · EA link

Stress Externalities More in AI Safety Pitches

NickGabs · Sep 26, 2022, 8:31 PM
31 points
9 comments · 2 min read · EA link

Optimism, AI risk, and EA blind spots

Justis · Sep 28, 2022, 5:21 PM
87 points
21 comments · 8 min read · EA link

Is this community over-emphasizing AI alignment?

Lixiang · Jan 8, 2023, 6:23 AM
1 point
5 comments · 1 min read · EA link

Heretical Thoughts on AI | Eli Dourado

𝕮𝖎𝖓𝖊𝖗𝖆 · Jan 19, 2023, 4:11 PM
142 points
15 comments · 1 min read · EA link

What are the “no free lunch” theorems?

Vishakha Agrawal · Feb 4, 2025, 2:02 AM
3 points
0 comments · 1 min read · EA link
(aisafety.info)

New tool for exploring EA Forum and LessWrong—Tree of Tags

Filip Sondej · Oct 27, 2022, 5:43 PM
43 points
8 comments · 1 min read · EA link

The Dissolution of AI Safety

Roko · Dec 12, 2024, 10:46 AM
−7 points
0 comments · 1 min read · EA link
(www.transhumanaxiology.com)

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

stepanlos · Jul 14, 2023, 5:10 PM
5 points
1 comment · 6 min read · EA link

The Prospect of an AI Winter

Erich_Grunewald 🔸 · Mar 27, 2023, 8:55 PM
56 points
13 comments · 1 min read · EA link

Notes on “the hot mess theory of AI misalignment”

JakubK · Apr 21, 2023, 10:07 AM
44 points
3 comments · 1 min read · EA link

Language Agents Reduce the Risk of Existential Catastrophe

cdkg · May 29, 2023, 9:59 AM
29 points
6 comments · 26 min read · EA link

What can superintelligent ANI tell us about superintelligent AGI?

Ted Sanders · Jun 12, 2023, 6:32 AM
81 points
20 comments · 5 min read · EA link

Summary: Against the Singularity Hypothesis (David Thorstad)

Noah Varley🔸 · Mar 27, 2024, 1:48 PM
63 points
10 comments · 5 min read · EA link

My Proven AI Safety Explanation (as a computing student)

Mica White · Feb 6, 2024, 3:58 AM
8 points
4 comments · 6 min read · EA link

Against AI As An Existential Risk

Noah Birnbaum · Jul 30, 2024, 7:24 PM
6 points
3 comments · 1 min read · EA link
(irrationalitycommunity.substack.com)

No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance

Noah Varley🔸 · May 14, 2024, 11:57 PM
36 points
2 comments · 1 min read · EA link
(arxiv.org)

The Failed Strategy of Artificial Intelligence Doomers

yhoiseth · Feb 5, 2025, 7:34 PM
12 points
2 comments · 1 min read · EA link
(letter.palladiummag.com)

‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

Froolow · Oct 18, 2022, 10:54 PM
111 points
63 comments · 39 min read · EA link

Summary: Against the singularity hypothesis

Global Priorities Institute · May 22, 2024, 11:05 AM
46 points
14 comments · 4 min read · EA link
(globalprioritiesinstitute.org)

Shutting down all competing AI projects might not buy a lot of time due to Internal Time Pressure

ThomasCederborg · Oct 3, 2024, 12:05 AM
6 points
1 comment · 12 min read · EA link

Why misaligned AGI won’t lead to mass killings (and what actually matters instead)

Julian Nalenz · Feb 6, 2025, 1:22 PM
−3 points
5 comments · 3 min read · EA link
(blog.hermesloom.org)

[Question] What predictions from theoretical AI Safety research have been confirmed by empirical work?

freedomandutility · Dec 29, 2024, 8:19 AM
43 points
10 comments · 1 min read · EA link

Are AI safetyists crying wolf?

sarahhw · Jan 8, 2025, 8:54 PM
61 points
21 comments · 16 min read · EA link
(longerramblings.substack.com)

How do fictional stories illustrate AI misalignment?

Vishakha Agrawal · Jan 15, 2025, 6:16 AM
4 points
0 comments · 2 min read · EA link
(aisafety.info)

My personal cruxes for working on AI safety

Buck · Feb 13, 2020, 7:11 AM
136 points
35 comments · 44 min read · EA link

Should AI X-Risk Worriers Short the Market?

postlibertarian · Nov 4, 2024, 4:16 PM
14 points
1 comment · 6 min read · EA link

LLMs might not be the future of search: at least, not yet.

James-Hartree-Law · Jan 22, 2025, 9:40 PM
4 points
1 comment · 4 min read · EA link

AI companies are unlikely to make high-assurance safety cases if timelines are short

Ryan Greenblatt · Jan 23, 2025, 6:41 PM
45 points
1 comment · 1 min read · EA link