
AI risk skepticism

Last edit: 12 Feb 2024 8:57 UTC by Vasco Grilo🔸

AI risk skepticism refers to doubt about, or criticism of, arguments that advanced artificial intelligence poses a catastrophic or existential risk.

Further reading

Bergal, Asya & Robert Long (2019) Conversation with Robin Hanson, AI Impacts, November 13.

Garfinkel, Ben (2019) How sure are we about this AI stuff?, Effective Altruism Forum, February 9.

Vinding, Magnus (2017/2022) A Contra AI FOOM Reading List.

Thorstad, David. Exaggerating the risks, a series of posts on the Reflective Altruism blog.

Related entries

AI alignment | AI safety | criticism of effective altruism

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · 21 Mar 2023 1:23 UTC
166 points
21 comments · 39 min read · EA link

Counterarguments to the basic AI risk case

Katja_Grace · 14 Oct 2022 20:30 UTC
284 points
23 comments · 34 min read · EA link

My highly personal skepticism braindump on existential risk from artificial intelligence.

NunoSempere · 23 Jan 2023 20:08 UTC
435 points
116 comments · 14 min read · EA link
(nunosempere.com)

[linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23

Arjun Panickssery · 14 Apr 2023 23:26 UTC
41 points
3 comments · 4 min read · EA link
(quillette.com)

Evolution provides no evidence for the sharp left turn

Quintin Pope · 11 Apr 2023 18:48 UTC
43 points
2 comments · 1 min read · EA link

Exponential AI takeoff is a myth

Christoph Hartmann 🔸 · 31 May 2023 11:47 UTC
46 points
11 comments · 9 min read · EA link

“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation

titotal · 29 Sep 2023 14:01 UTC
102 points
33 comments · 20 min read · EA link
(titotal.substack.com)

Counting arguments provide no evidence for AI doom

Nora Belrose · 27 Feb 2024 23:03 UTC
84 points
15 comments · 1 min read · EA link

Reasons I’ve been hesitant about high levels of near-ish AI risk

elifland · 22 Jul 2022 1:32 UTC
207 points
16 comments · 7 min read · EA link
(www.foxy-scout.com)

The bullseye framework: My case against AI doom

titotal · 30 May 2023 11:52 UTC
70 points
15 comments · 17 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · 24 Nov 2022 11:54 UTC
74 points
32 comments · 22 min read · EA link

Why EAs are skeptical about AI Safety

Lukas Trötzmüller🔸 · 18 Jul 2022 19:01 UTC
290 points
31 comments · 29 min read · EA link

Why AI is Harder Than We Think—Melanie Mitchell

Eevee🔹 · 28 Apr 2021 8:19 UTC
45 points
7 comments · 2 min read · EA link
(arxiv.org)

Ben Garfinkel: How sure are we about this AI stuff?

bgarfinkel · 9 Feb 2019 19:17 UTC
128 points
20 comments · 18 min read · EA link

titotal on AI risk scepticism

Vasco Grilo🔸 · 30 May 2024 17:03 UTC
75 points
3 comments · 6 min read · EA link
(forum.effectivealtruism.org)

[Question] What is the current most representative EA AI x-risk argument?

Matthew_Barnett · 15 Dec 2023 22:04 UTC
117 points
50 comments · 3 min read · EA link

[Question] Are AI risks tractable?

defun 🔸 · 21 May 2024 13:45 UTC
23 points
1 comment · 1 min read · EA link

Three Biases That Made Me Believe in AI Risk

beth · 13 Feb 2019 23:22 UTC
41 points
20 comments · 3 min read · EA link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · 11 Mar 2024 6:03 UTC
25 points
0 comments · 1 min read · EA link
(www.youtube.com)

Deceptive Alignment is <1% Likely by Default

DavidW · 21 Feb 2023 15:07 UTC
54 points
26 comments · 14 min read · EA link

Chaining the evil genie: why “outer” AI safety is probably easy

titotal · 30 Aug 2022 13:55 UTC
40 points
12 comments · 10 min read · EA link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · 26 May 2023 15:57 UTC
59 points
0 comments · 18 min read · EA link

New article from Oren Etzioni

Aryeh Englander · 25 Feb 2020 15:38 UTC
23 points
3 comments · 2 min read · EA link

How I failed to form views on AI safety

Ada-Maaria Hyvärinen · 17 Apr 2022 11:05 UTC
213 points
72 comments · 40 min read · EA link

Imitation Learning is Probably Existentially Safe

Vasco Grilo🔸 · 30 Apr 2024 17:06 UTC
19 points
7 comments · 3 min read · EA link
(www.openphilanthropy.org)

Motivation gaps: Why so much EA criticism is hostile and lazy

titotal · 22 Apr 2024 11:49 UTC
211 points
44 comments · 19 min read · EA link
(titotal.substack.com)

“X distracts from Y” as a thinly-disguised fight over group status / politics

Steven Byrnes · 25 Sep 2023 15:29 UTC
89 points
9 comments · 8 min read · EA link

Reasons for my negative feelings towards the AI risk discussion

fergusq · 1 Sep 2022 7:33 UTC
43 points
9 comments · 4 min read · EA link

13 Very Different Stances on AGI

Ozzie Gooen · 27 Dec 2021 23:30 UTC
84 points
23 comments · 3 min read · EA link

AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death

titotal · 22 Sep 2022 15:00 UTC
49 points
11 comments · 15 min read · EA link

Can a terrorist attack cause human extinction? Not on priors

Vasco Grilo🔸 · 2 Dec 2023 8:20 UTC
43 points
9 comments · 15 min read · EA link

Future Matters #7: AI timelines, AI skepticism, and lock-in

Pablo · 3 Feb 2023 11:47 UTC
54 points
0 comments · 17 min read · EA link

How “AGI” could end up being many different specialized AI’s stitched together

titotal · 8 May 2023 12:32 UTC
31 points
2 comments · 9 min read · EA link

AI is centralizing by default; let’s not make it worse

Quintin Pope · 21 Sep 2023 13:35 UTC
53 points
16 comments · 15 min read · EA link

A tale of 2.5 orthogonality theses

Arepo · 1 May 2022 13:53 UTC
140 points
31 comments · 11 min read · EA link

Why AGI systems will not be fanatical maximisers (unless trained by fanatical humans)

titotal · 17 May 2023 11:58 UTC
43 points
3 comments · 15 min read · EA link

My cover story in Jacobin on AI capitalism and the x-risk debates

Garrison · 12 Feb 2024 23:34 UTC
154 points
10 comments · 6 min read · EA link
(jacobin.com)

The Leeroy Jenkins principle: How faulty AI could guarantee “warning shots”

titotal · 14 Jan 2024 15:03 UTC
54 points
2 comments · 21 min read · EA link
(titotal.substack.com)

On the Dwarkesh/Chollet Podcast, and the cruxes of scaling to AGI

JWS 🔸 · 15 Jun 2024 20:24 UTC
66 points
48 comments · 17 min read · EA link

I bet Greg Colbourn 10 k€ that AI will not kill us all by the end of 2027

Vasco Grilo🔸 · 4 Jun 2024 16:37 UTC
189 points
57 comments · 2 min read · EA link

Destroy the “neoliberal hallucination” & fight for animal rights through open rescue.

Chloe Leffakis · 15 Aug 2023 4:47 UTC
−17 points
2 comments · 1 min read · EA link
(www.reddit.com)

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · 14 Jun 2022 19:11 UTC
63 points
14 comments · 4 min read · EA link
(theinsideview.ai)

Red-teaming existential risk from AI

Zed Tarar · 30 Nov 2023 14:35 UTC
30 points
16 comments · 6 min read · EA link

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · 21 Feb 2021 13:41 UTC
16 points
1 comment · 7 min read · EA link

The AI Messiah

ryancbriggs · 5 May 2022 16:58 UTC
71 points
44 comments · 2 min read · EA link

[Question] Why should we *not* put effort into AI safety research?

Ben Thompson · 16 May 2021 5:11 UTC
15 points
5 comments · 1 min read · EA link

In favour of exploring nagging doubts about x-risk

Owen Cotton-Barratt · 25 Jun 2024 23:52 UTC
89 points
15 comments · 2 min read · EA link

AI scaling myths

Nicholas Kruus🔸 · 27 Jun 2024 20:29 UTC
30 points
0 comments · 1 min read · EA link
(open.substack.com)

Yann LeCun on AGI and AI Safety

Chris Leong · 8 Aug 2023 23:43 UTC
23 points
4 comments · 1 min read · EA link
(drive.google.com)

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · 25 Jun 2023 16:59 UTC
80 points
24 comments · 1 min read · EA link

Notes on “the hot mess theory of AI misalignment”

JakubK · 21 Apr 2023 10:07 UTC
44 points
3 comments · 1 min read · EA link

Mere exposure effect: Bias in Evaluating AGI X-Risks

Remmelt · 27 Dec 2022 14:05 UTC
4 points
1 comment · 1 min read · EA link

Loss of control of AI is not a likely source of AI x-risk

squek · 9 Nov 2022 5:48 UTC
8 points
0 comments · 1 min read · EA link

Critique of Superintelligence Part 2

Fods12 · 13 Dec 2018 5:12 UTC
10 points
12 comments · 7 min read · EA link

A Critique of AI Takeover Scenarios

Fods12 · 31 Aug 2022 13:49 UTC
53 points
4 comments · 12 min read · EA link

Podcast: Magnus Vinding on reducing suffering, why AI progress is likely to be gradual and distributed and how to reason about politics

Gus Docker · 21 Nov 2021 15:29 UTC
26 points
0 comments · 1 min read · EA link
(www.utilitarianpodcast.com)

No, CS majors didn’t delude themselves that the best way to save the world is to do CS research

Robert_Wiblin · 15 Dec 2015 17:13 UTC
20 points
7 comments · 3 min read · EA link

My personal cruxes for working on AI safety

Buck · 13 Feb 2020 7:11 UTC
136 points
35 comments · 44 min read · EA link

AGI Isn’t Close—Future Fund Worldview Prize

Toni MUENDEL · 18 Dec 2022 16:03 UTC
−8 points
24 comments · 13 min read · EA link

[Question] What Do AI Safety Pitches Not Get About Your Field?

a_e_r · 20 Sep 2022 18:13 UTC
70 points
18 comments · 1 min read · EA link

Why some people believe in AGI, but I don’t.

cveres · 26 Oct 2022 3:09 UTC
13 points
2 comments · 4 min read · EA link

The missing link to AGI

Yuri Barzov · 28 Sep 2022 16:37 UTC
1 point
7 comments · 1 min read · EA link

The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk

Ember · 16 Aug 2022 19:48 UTC
24 points
35 comments · 17 min read · EA link

Critique of Superintelligence Part 1

Fods12 · 13 Dec 2018 5:10 UTC
22 points
13 comments · 8 min read · EA link

Critique of Superintelligence Part 3

Fods12 · 13 Dec 2018 5:13 UTC
3 points
5 comments · 7 min read · EA link

Critique of Superintelligence Part 5

Fods12 · 13 Dec 2018 5:19 UTC
12 points
2 comments · 6 min read · EA link

Maybe AI risk shouldn’t affect your life plan all that much

Justis · 22 Jul 2022 15:30 UTC
22 points
4 comments · 6 min read · EA link

Critique of Superintelligence Part 4

Fods12 · 13 Dec 2018 5:14 UTC
4 points
2 comments · 4 min read · EA link

Stress Externalities More in AI Safety Pitches

NickGabs · 26 Sep 2022 20:31 UTC
31 points
9 comments · 2 min read · EA link

Optimism, AI risk, and EA blind spots

Justis · 28 Sep 2022 17:21 UTC
87 points
21 comments · 8 min read · EA link

Is this community over-emphasizing AI alignment?

Lixiang · 8 Jan 2023 6:23 UTC
1 point
5 comments · 1 min read · EA link

Heretical Thoughts on AI | Eli Dourado

𝕮𝖎𝖓𝖊𝖗𝖆 · 19 Jan 2023 16:11 UTC
142 points
15 comments · 1 min read · EA link

New tool for exploring EA Forum and LessWrong—Tree of Tags

Filip Sondej · 27 Oct 2022 17:43 UTC
43 points
8 comments · 1 min read · EA link

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

stepanlos · 14 Jul 2023 17:10 UTC
5 points
1 comment · 6 min read · EA link

The Prospect of an AI Winter

Erich_Grunewald 🔸 · 27 Mar 2023 20:55 UTC
56 points
13 comments · 1 min read · EA link

[Question] Benefits/Risks of Scott Aaronson’s Orthodox/Reform Framing for AI Alignment

Jeremy · 21 Nov 2022 17:47 UTC
15 points
5 comments · 1 min read · EA link
(scottaaronson.blog)

Language Agents Reduce the Risk of Existential Catastrophe

cdkg · 29 May 2023 9:59 UTC
29 points
6 comments · 26 min read · EA link

What can superintelligent ANI tell us about superintelligent AGI?

Ted Sanders · 12 Jun 2023 6:32 UTC
81 points
20 comments · 5 min read · EA link

Summary: Against the Singularity Hypothesis (David Thorstad)

Nicholas Kruus🔸 · 27 Mar 2024 13:48 UTC
63 points
10 comments · 5 min read · EA link

My Proven AI Safety Explanation (as a computing student)

Mica White · 6 Feb 2024 3:58 UTC
8 points
4 comments · 6 min read · EA link

Against AI As An Existential Risk

Daniel Birnbaum · 30 Jul 2024 19:24 UTC
6 points
3 comments · 1 min read · EA link
(irrationalitycommunity.substack.com)

No “Zero-Shot” Without Exponential Data: Pretraining Concept Frequency Determines Multimodal Model Performance

Nicholas Kruus🔸 · 14 May 2024 23:57 UTC
36 points
2 comments · 1 min read · EA link
(arxiv.org)

‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

Froolow · 18 Oct 2022 22:54 UTC
111 points
63 comments · 39 min read · EA link

Summary: Against the singularity hypothesis

Global Priorities Institute · 22 May 2024 11:05 UTC
46 points
14 comments · 4 min read · EA link
(globalprioritiesinstitute.org)

Shutting down all competing AI projects might not buy a lot of time due to Internal Time Pressure

ThomasCederborg · 3 Oct 2024 0:05 UTC
6 points
1 comment · 12 min read · EA link

Should AI X-Risk Worriers Short the Market?

postlibertarian · 4 Nov 2024 16:16 UTC
14 points
1 comment · 6 min read · EA link

Cutting AI Safety down to size

Holly Elmore ⏸️ 🔸 · 9 Nov 2024 23:40 UTC
76 points
4 comments · 5 min read · EA link