AI risk skepticism

AI risk skepticism refers to doubt about arguments that advanced artificial intelligence poses a catastrophic or existential risk.

Further reading

Bergal, Asya & Robert Long (2019) Conversation with Robin Hanson, AI Impacts, November 13.

Garfinkel, Ben (2019) How sure are we about this AI stuff?, Effective Altruism Forum, February 9.

Vinding, Magnus (2017/2022) A Contra AI FOOM Reading List.

Thorstad, David, Exaggerating the risks, Reflective Altruism (blog series on exaggerated AI risk claims).

Related entries

AI alignment | AI safety | criticism of effective altruism

Counterarguments to the basic AI risk case

Katja_Grace · 14 Oct 2022 20:30 UTC
280 points · 23 comments · 34 min read · EA link

My Objections to “We’re All Gonna Die with Eliezer Yudkowsky”

Quintin Pope · 21 Mar 2023 1:23 UTC
167 points · 20 comments · 39 min read · EA link

My highly personal skepticism braindump on existential risk from artificial intelligence.

NunoSempere · 23 Jan 2023 20:08 UTC
431 points · 116 comments · 14 min read · EA link
(nunosempere.com)

Exponential AI takeoff is a myth

Christoph Hartmann · 31 May 2023 11:47 UTC
38 points · 11 comments · 9 min read · EA link

[linkpost] “What Are Reasonable AI Fears?” by Robin Hanson, 2023-04-23

Arjun Panickssery · 14 Apr 2023 23:26 UTC
41 points · 3 comments · 4 min read · EA link
(quillette.com)

Evolution provides no evidence for the sharp left turn

Quintin Pope · 11 Apr 2023 18:48 UTC
43 points · 2 comments · 1 min read · EA link

Ben Garfinkel: How sure are we about this AI stuff?

bgarfinkel · 9 Feb 2019 19:17 UTC
128 points · 20 comments · 18 min read · EA link

Why EAs are skeptical about AI Safety

Lukas Trötzmüller · 18 Jul 2022 19:01 UTC
289 points · 31 comments · 30 min read · EA link

Reasons I’ve been hesitant about high levels of near-ish AI risk

elifland · 22 Jul 2022 1:32 UTC
206 points · 16 comments · 7 min read · EA link
(www.foxy-scout.com)

Two contrasting models of “intelligence” and future growth

Magnus Vinding · 24 Nov 2022 11:54 UTC
74 points · 32 comments · 22 min read · EA link

[Question] What is the current most representative EA AI x-risk argument?

Matthew_Barnett · 15 Dec 2023 22:04 UTC
116 points · 48 comments · 3 min read · EA link

Why AI is Harder Than We Think—Melanie Mitchell

BrownHairedEevee · 28 Apr 2021 8:19 UTC
41 points · 7 comments · 2 min read · EA link
(arxiv.org)

Counting arguments provide no evidence for AI doom

Nora Belrose · 27 Feb 2024 23:03 UTC
61 points · 13 comments · 1 min read · EA link

‘Dissolving’ AI Risk – Parameter Uncertainty in AI Future Forecasting

Froolow · 18 Oct 2022 22:54 UTC
112 points · 63 comments · 39 min read · EA link

Three Biases That Made Me Believe in AI Risk

beth · 13 Feb 2019 23:22 UTC
41 points · 20 comments · 3 min read · EA link

How I failed to form views on AI safety

Ada-Maaria Hyvärinen · 17 Apr 2022 11:05 UTC
210 points · 71 comments · 40 min read · EA link

Deconstructing Bostrom’s Classic Argument for AI Doom

Nora Belrose · 11 Mar 2024 6:03 UTC
25 points · 0 comments · 1 min read · EA link
(www.youtube.com)

New article from Oren Etzioni

Aryeh Englander · 25 Feb 2020 15:38 UTC
23 points · 3 comments · 2 min read · EA link

Yann LeCun on AGI and AI Safety

Chris Leong · 8 Aug 2023 23:43 UTC
22 points · 4 comments · 1 min read · EA link
(drive.google.com)

The AI Messiah

ryancbriggs · 5 May 2022 16:58 UTC
69 points · 44 comments · 2 min read · EA link

Destroy the “neoliberal hallucination” & fight for animal rights through open rescue.

Chloe Leffakis · 15 Aug 2023 4:47 UTC
−17 points · 2 comments · 1 min read · EA link
(www.reddit.com)

Interview with Tom Chivers: “AI is a plausible existential risk, but it feels as if I’m in Pascal’s mugging”

felix.h · 21 Feb 2021 13:41 UTC
16 points · 1 comment · 7 min read · EA link

[Question] Why should we *not* put effort into AI safety research?

Ben Thompson · 16 May 2021 5:11 UTC
15 points · 5 comments · 1 min read · EA link

A tale of 2.5 orthogonality theses

Arepo · 1 May 2022 13:53 UTC
138 points · 31 comments · 15 min read · EA link

13 Very Different Stances on AGI

Ozzie Gooen · 27 Dec 2021 23:30 UTC
84 points · 23 comments · 3 min read · EA link

“Diamondoid bacteria” nanobots: deadly threat or dead-end? A nanotech investigation

titotal · 29 Sep 2023 14:01 UTC
99 points · 33 comments · 20 min read · EA link
(titotal.substack.com)

Did Bengio and Tegmark lose a debate about AI x-risk against LeCun and Mitchell?

Karl von Wendt · 25 Jun 2023 16:59 UTC
80 points · 24 comments · 1 min read · EA link

The bullseye framework: My case against AI doom

titotal · 30 May 2023 11:52 UTC
55 points · 7 comments · 17 min read · EA link

Red-teaming existential risk from AI

Zed Tarar · 30 Nov 2023 14:35 UTC
30 points · 16 comments · 6 min read · EA link

Future Matters #7: AI timelines, AI skepticism, and lock-in

Pablo · 3 Feb 2023 11:47 UTC
54 points · 0 comments · 17 min read · EA link

My cover story in Jacobin on AI capitalism and the x-risk debates

Garrison · 12 Feb 2024 23:34 UTC
152 points · 10 comments · 6 min read · EA link
(jacobin.com)

AI is centralizing by default; let’s not make it worse

Quintin Pope · 21 Sep 2023 13:35 UTC
53 points · 16 comments · 15 min read · EA link

Blake Richards on Why he is Skeptical of Existential Risk from AI

Michaël Trazzi · 14 Jun 2022 19:11 UTC
63 points · 14 comments · 4 min read · EA link
(theinsideview.ai)

Reasons for my negative feelings towards the AI risk discussion

fergusq · 1 Sep 2022 7:33 UTC
41 points · 9 comments · 4 min read · EA link

Can a terrorist attack cause human extinction? Not on priors

Vasco Grilo · 2 Dec 2023 8:20 UTC
43 points · 8 comments · 15 min read · EA link

Language Agents Reduce the Risk of Existential Catastrophe

cdkg · 29 May 2023 9:59 UTC
29 points · 6 comments · 26 min read · EA link

What can superintelligent ANI tell us about superintelligent AGI?

Ted Sanders · 12 Jun 2023 6:32 UTC
81 points · 20 comments · 5 min read · EA link

Summary: Against the Singularity Hypothesis (David Thorstad)

Nicholas Kruus · 27 Mar 2024 13:48 UTC
37 points · 4 comments · 5 min read · EA link

My Proven AI Safety Explanation (as a computing student)

Mica White · 6 Feb 2024 3:58 UTC
8 points · 4 comments · 6 min read · EA link

AI Risk and Survivorship Bias—How Andreessen and LeCun got it wrong

stepanlos · 14 Jul 2023 17:10 UTC
4 points · 1 comment · 6 min read · EA link

Mere exposure effect: Bias in Evaluating AGI X-Risks

Remmelt · 27 Dec 2022 14:05 UTC
4 points · 1 comment · 1 min read · EA link

Loss of control of AI is not a likely source of AI x-risk

squek · 9 Nov 2022 5:48 UTC
8 points · 0 comments · 1 min read · EA link

Critique of Superintelligence Part 2

Fods12 · 13 Dec 2018 5:12 UTC
10 points · 12 comments · 7 min read · EA link

A Critique of AI Takeover Scenarios

Fods12 · 31 Aug 2022 13:49 UTC
53 points · 4 comments · 12 min read · EA link

Podcast: Magnus Vinding on reducing suffering, why AI progress is likely to be gradual and distributed and how to reason about politics

Gus Docker · 21 Nov 2021 15:29 UTC
26 points · 0 comments · 1 min read · EA link
(www.utilitarianpodcast.com)

No, CS majors didn’t delude themselves that the best way to save the world is to do CS research

Robert_Wiblin · 15 Dec 2015 17:13 UTC
20 points · 7 comments · 3 min read · EA link

My personal cruxes for working on AI safety

Buck · 13 Feb 2020 7:11 UTC
135 points · 35 comments · 45 min read · EA link

AGI Isn’t Close—Future Fund Worldview Prize

Toni MUENDEL · 18 Dec 2022 16:03 UTC
−8 points · 24 comments · 13 min read · EA link

[Question] What Do AI Safety Pitches Not Get About Your Field?

a_e_r · 20 Sep 2022 18:13 UTC
70 points · 18 comments · 1 min read · EA link

Why some people believe in AGI, but I don’t.

cveres · 26 Oct 2022 3:09 UTC
13 points · 2 comments · 4 min read · EA link

The missing link to AGI

Yuri Barzov · 28 Sep 2022 16:37 UTC
1 point · 7 comments · 1 min read · EA link

The Credibility of Apocalyptic Claims: A Critique of Techno-Futurism within Existential Risk

Ember · 16 Aug 2022 19:48 UTC
24 points · 35 comments · 17 min read · EA link

Critique of Superintelligence Part 1

Fods12 · 13 Dec 2018 5:10 UTC
22 points · 13 comments · 8 min read · EA link

Critique of Superintelligence Part 3

Fods12 · 13 Dec 2018 5:13 UTC
3 points · 5 comments · 7 min read · EA link

Critique of Superintelligence Part 5

Fods12 · 13 Dec 2018 5:19 UTC
12 points · 2 comments · 6 min read · EA link

Chaining the evil genie: why “outer” AI safety is probably easy

titotal · 30 Aug 2022 13:55 UTC
20 points · 11 comments · 10 min read · EA link

Maybe AI risk shouldn’t affect your life plan all that much

Justis · 22 Jul 2022 15:30 UTC
21 points · 4 comments · 6 min read · EA link

Critique of Superintelligence Part 4

Fods12 · 13 Dec 2018 5:14 UTC
4 points · 2 comments · 4 min read · EA link

Stress Externalities More in AI Safety Pitches

NickGabs · 26 Sep 2022 20:31 UTC
31 points · 9 comments · 2 min read · EA link

Optimism, AI risk, and EA blind spots

Justis · 28 Sep 2022 17:21 UTC
87 points · 21 comments · 8 min read · EA link

Is this community over-emphasizing AI alignment?

Lixiang · 8 Jan 2023 6:23 UTC
1 point · 5 comments · 1 min read · EA link

Heretical Thoughts on AI | Eli Dourado

𝕮𝖎𝖓𝖊𝖗𝖆 · 19 Jan 2023 16:11 UTC
138 points · 15 comments · 1 min read · EA link

New tool for exploring EA Forum and LessWrong—Tree of Tags

Filip Sondej · 27 Oct 2022 17:43 UTC
43 points · 8 comments · 1 min read · EA link

Deceptive Alignment is <1% Likely by Default

DavidW · 21 Feb 2023 15:07 UTC
46 points · 24 comments · 14 min read · EA link

[Question] Benefits/Risks of Scott Aaronson’s Orthodox/Reform Framing for AI Alignment

Jeremy · 21 Nov 2022 17:47 UTC
15 points · 5 comments · 1 min read · EA link
(scottaaronson.blog)

The Prospect of an AI Winter

Erich_Grunewald · 27 Mar 2023 20:55 UTC
56 points · 13 comments · 1 min read · EA link

Notes on “the hot mess theory of AI misalignment”

JakubK · 21 Apr 2023 10:07 UTC
44 points · 3 comments · 1 min read · EA link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · 26 May 2023 15:57 UTC
46 points · 0 comments · 18 min read · EA link