
Slowing down AI


Slowing down AI development has been proposed as a policy intervention to reduce AGI risk by lengthening the time society has to prepare before AGI is developed.

Related entries

AI race | AI takeoff

Debate series: should we push for a pause on the development of AI?

Ben_West🔸 · 8 Sep 2023 16:29 UTC
252 points
58 comments · 1 min read · EA link

Let’s think about slowing down AI

Katja_Grace · 23 Dec 2022 19:56 UTC
339 points
9 comments · 38 min read · EA link

Leverage points for a pause

Remmelt · 28 Aug 2024 9:21 UTC
6 points
0 comments · 1 min read · EA link

Request to AGI organizations: Share your views on pausing AI progress

Akash · 11 Apr 2023 17:30 UTC
85 points
1 comment · 1 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 31 Oct 2023 5:46 UTC
14 points
1 comment · 2 min read · EA link
(www.campaignforaisafety.org)

An AI crash is our best bet for restricting AI

Remmelt · 11 Oct 2024 2:12 UTC
20 points
3 comments · 1 min read · EA link

Unions for AI safety?

dEAsign · 24 Sep 2023 0:13 UTC
7 points
12 comments · 2 min read · EA link

“Near Midnight in Suicide City”

Greg_Colbourn ⏸️ · 6 Dec 2024 19:54 UTC
5 points
0 comments · 1 min read · EA link
(www.youtube.com)

Against Aschenbrenner: How ‘Situational Awareness’ constructs a narrative that undermines safety and threatens humanity

GideonF · 15 Jul 2024 16:21 UTC
238 points
22 comments · 21 min read · EA link

FLI open letter: Pause giant AI experiments

Zach Stein-Perlman · 29 Mar 2023 4:04 UTC
220 points
38 comments · 2 min read · EA link
(futureoflife.org)

Cruxes on US lead for some domestic AI regulation

Zach Stein-Perlman · 10 Sep 2023 18:00 UTC
20 points
6 comments · 2 min read · EA link

EA Anywhere Discussion: The case for slowing down AI

Sasha Berezhnoi 🔸 · 31 Mar 2023 6:43 UTC
5 points
0 comments · 1 min read · EA link

Apply to CEEALAR to do AGI moratorium work

Greg_Colbourn ⏸️ · 26 Jul 2023 21:24 UTC
62 points
0 comments · 1 min read · EA link

UN Secretary-General recognises existential threat from AI

Greg_Colbourn ⏸️ · 15 Jun 2023 17:03 UTC
58 points
1 comment · 1 min read · EA link

Why Stop AI is barricading OpenAI

Remmelt · 14 Oct 2024 7:12 UTC
−19 points
28 comments · 6 min read · EA link
(docs.google.com)

Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure

Otto · 8 May 2023 10:49 UTC
28 points
5 comments · 6 min read · EA link

Ex-OpenAI researcher says OpenAI mass-violated copyright law

Remmelt · 24 Oct 2024 1:00 UTC
11 points
0 comments · 1 min read · EA link
(suchir.net)

Is it time for a pause?

Kelsey Piper · 6 Apr 2023 11:48 UTC
103 points
6 comments · 5 min read · EA link

Some reasons to start a project to stop harmful AI

Remmelt · 22 Aug 2024 16:23 UTC
5 points
0 comments · 2 min read · EA link

A note about differential technological development

So8res · 24 Jul 2022 23:41 UTC
58 points
8 comments · 6 min read · EA link

OpenAI’s o1 tried to avoid being shut down, and lied about it, in evals

Greg_Colbourn ⏸️ · 6 Dec 2024 15:25 UTC
23 points
9 comments · 1 min read · EA link
(www.transformernews.ai)

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn ⏸️ · 19 Jul 2023 16:46 UTC
31 points
2 comments · 1 min read · EA link

Partial Transcript of Recent Senate Hearing Discussing AI X-Risk

Daniel_Eth · 27 Jul 2023 9:16 UTC
150 points
2 comments · 22 min read · EA link
(medium.com)

Navigating AI Risks (NAIR) #1: Slowing Down AI

simeon_c · 14 Apr 2023 14:35 UTC
12 points
1 comment · 1 min read · EA link
(navigatingairisks.substack.com)

A moral backlash against AI will probably slow down AGI development

Geoffrey Miller · 31 May 2023 21:31 UTC
147 points
22 comments · 14 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 7 Aug 2023 6:09 UTC
32 points
2 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Which side of the AI safety community are you in?

Greg_Colbourn ⏸️ · 23 Oct 2025 14:23 UTC
9 points
1 comment · 2 min read · EA link
(www.lesswrong.com)

We’re not prepared for an AI market crash

Remmelt · 1 Apr 2025 4:33 UTC
27 points
4 comments · 2 min read · EA link

List of petitions against OpenAI’s for-profit move

Remmelt · 25 Apr 2025 10:03 UTC
13 points
4 comments · 1 min read · EA link

[Question] Slowing down AI progress?

Eleni_A · 26 Jul 2022 8:46 UTC
16 points
9 comments · 1 min read · EA link

Some quotes from Tuesday’s Senate hearing on AI

Daniel_Eth · 17 May 2023 12:13 UTC
105 points
7 comments · 4 min read · EA link

Funding circle aimed at slowing down AI—looking for participants

Greg_Colbourn ⏸️ · 25 Jan 2024 23:58 UTC
92 points
3 comments · 2 min read · EA link

AMA: PauseAI US needs money! Ask founder/Exec Dir Holly Elmore anything for 11/19

Holly Elmore ⏸️ 🔸 · 11 Nov 2024 23:51 UTC
91 points
57 comments · 4 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

EliezerYudkowsky · 9 Apr 2023 15:53 UTC
50 points
3 comments · 12 min read · EA link

Update from Campaign for AI Safety

Nik Samoylov · 1 Jun 2023 10:46 UTC
22 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn ⏸️ · 2 May 2023 10:40 UTC
13 points
28 comments · 3 min read · EA link

Giving away copies of Uncontrollable by Darren McKee

Greg_Colbourn ⏸️ · 14 Dec 2023 17:00 UTC
39 points
2 comments · 1 min read · EA link

Discussion Group: Slowing Down AI Progress

Group Organizer · 12 Jan 2023 4:52 UTC
4 points
0 comments · 1 min read · EA link

Why I’m doing PauseAI

Joseph Miller · 30 Apr 2024 16:21 UTC
147 points
36 comments · 4 min read · EA link

The state of AI in different countries — an overview

Lizka · 14 Sep 2023 10:37 UTC
68 points
6 comments · 13 min read · EA link
(aisafetyfundamentals.com)

Pause House, Blackpool

Greg_Colbourn ⏸️ · 13 Oct 2025 11:36 UTC
79 points
0 comments · 1 min read · EA link
(gregcolbourn.substack.com)

Instead of technical research, more people should focus on buying time

Akash · 5 Nov 2022 20:43 UTC
107 points
31 comments · 14 min read · EA link

OpenAI lost $5 billion in 2024 (and its losses are increasing)

Remmelt · 31 Mar 2025 4:17 UTC
0 points
3 comments · 12 min read · EA link
(www.wheresyoured.at)

CoreWeave Is A Time Bomb

Remmelt · 31 Mar 2025 3:52 UTC
10 points
2 comments · 2 min read · EA link
(www.wheresyoured.at)

Katja Grace: Let’s think about slowing down AI

peterhartree · 23 Dec 2022 0:57 UTC
84 points
6 comments · 2 min read · EA link
(worldspiritsockpuppet.substack.com)

Corporate campaigns work: a key learning for AI Safety

Jamie_Harris · 17 Aug 2023 21:35 UTC
72 points
12 comments · 6 min read · EA link

Crash scenario 1: Rapidly mobilise for a 2025 AI crash

Remmelt · 11 Apr 2025 6:54 UTC
8 points
0 comments · 1 min read · EA link

Data Taxation: A Proposal for Slowing Down AGI Progress

Per Ivar Friborg · 11 Apr 2023 17:27 UTC
42 points
6 comments · 12 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 16 Jun 2023 9:45 UTC
15 points
3 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Deconfusing Pauses: Long Term Moratorium vs Slowing AI

GideonF · 4 Aug 2024 11:32 UTC
17 points
3 comments · 5 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn ⏸️ · 2 May 2023 10:17 UTC
68 points
35 comments · 13 min read · EA link

An Update On The Campaign For AI Safety Dot Org

yanni kyriacos · 5 May 2023 0:19 UTC
26 points
4 comments · 1 min read · EA link

Protest against Meta’s irreversible proliferation (Sept 29, San Francisco)

Holly Elmore ⏸️ 🔸 · 19 Sep 2023 23:40 UTC
114 points
32 comments · 1 min read · EA link

“Slower tech development” can be about ordering, gradualness, or distance from now

MichaelA🔸 · 14 Nov 2021 20:58 UTC
47 points
3 comments · 4 min read · EA link

US public opinion of AI policy and risk

Jamie E · 12 May 2023 13:22 UTC
111 points
7 comments · 15 min read · EA link

Who wants to bet me $25k at 1:7 odds that there won’t be an AI market crash in the next year?

Remmelt · 8 Apr 2025 8:31 UTC
7 points
5 comments · 1 min read · EA link

Map of all 40 copyright suits v. AI in U.S.

Remmelt · 26 Mar 2025 7:57 UTC
16 points
0 comments · 1 min read · EA link
(chatgptiseatingtheworld.com)

Hunger strike in front of Anthropic by one guy concerned about AI risk

Remmelt · 5 Sep 2025 4:00 UTC
19 points
18 comments · 1 min read · EA link

The costs of caution

Kelsey Piper · 1 May 2023 20:04 UTC
112 points
17 comments · 4 min read · EA link

AI 2027: What Superintelligence Looks Like (Linkpost)

Manuel Allgaier · 11 Apr 2025 10:31 UTC
51 points
3 comments · 42 min read · EA link
(ai-2027.com)

The Case For Civil Disobedience For The AI Movement

Murali Thoppil · 24 Apr 2023 13:07 UTC
16 points
3 comments · 4 min read · EA link
(murali42e.substack.com)

Safety-concerned EAs should prioritize AI governance over alignment

sammyboiz🔸 · 11 Jun 2024 15:47 UTC
61 points
20 comments · 1 min read · EA link

Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk

Michaël Trazzi · 16 Sep 2022 18:00 UTC
48 points
6 comments · 3 min read · EA link
(theinsideview.ai)

Slowing down AI progress is an underexplored alignment strategy

Michael Huang · 13 Jul 2022 3:22 UTC
92 points
11 comments · 3 min read · EA link
(www.lesswrong.com)

Anthropic is being sued for copying books to train Claude

Remmelt · 31 Aug 2024 2:57 UTC
3 points
0 comments · 2 min read · EA link
(fingfx.thomsonreuters.com)

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · 29 Mar 2023 23:30 UTC
212 points
75 comments · 3 min read · EA link
(time.com)

The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay)

MMMaas · 10 Aug 2022 11:00 UTC
90 points
6 comments · 9 min read · EA link
(verfassungsblog.de)

OpenAI defected, but we can take honest actions

Remmelt · 21 Oct 2024 8:41 UTC
19 points
1 comment · 2 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)

GideonF · 16 Mar 2023 14:37 UTC
59 points
11 comments · 15 min read · EA link

Beware of the new scaling paradigm

JohanEA · 19 Sep 2024 17:03 UTC
9 points
2 comments · 3 min read · EA link

Introducing StakeOut.AI

Harry Luk · 17 Feb 2024 0:21 UTC
52 points
6 comments · 9 min read · EA link

The International PauseAI Protest: Activism under uncertainty

Joseph Miller · 12 Oct 2023 17:36 UTC
136 points
3 comments · 4 min read · EA link

Fifteen Lawsuits against OpenAI

Remmelt · 9 Mar 2024 12:22 UTC
55 points
5 comments · 1 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 30 Aug 2023 5:36 UTC
7 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Lying is Cowardice, not Strategy

Connor Leahy · 25 Oct 2023 5:59 UTC
−5 points
15 comments · 5 min read · EA link
(cognition.cafe)

[Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety

Otto · 24 Aug 2023 16:01 UTC
14 points
2 comments · 5 min read · EA link

[Question] How bad would AI progress need to be for us to think general technological progress is also bad?

Jim Buhler · 6 Jul 2024 18:44 UTC
10 points
0 comments · 1 min read · EA link

Mess AI – deliberate corruption of the training data to prevent superintelligence

turchin · 17 Oct 2025 9:23 UTC
5 points
0 comments · 2 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 27 Sep 2023 2:44 UTC
16 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Efficacy of AI Activism: Have We Ever Said No?

Charlie Harrison · 27 Oct 2023 16:52 UTC
80 points
25 comments · 20 min read · EA link

[Question] Launching Applications for the Global AI Safety Fellowship 2025!

Impact Academy · 27 Nov 2024 15:33 UTC
9 points
1 comment · 1 min read · EA link

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

Otto · 24 Jul 2023 10:18 UTC
36 points
3 comments · 7 min read · EA link
(time.com)

OPEC for a slow AGI takeoff

vyrax · 21 Apr 2023 10:53 UTC
4 points
0 comments · 3 min read · EA link

IABIED Review—An Unfortunate Miss

Darren McKee · 18 Sep 2025 22:39 UTC
20 points
2 comments · 9 min read · EA link

[Question] Am I taking crazy pills? Why aren’t EAs advocating for a pause on AI capabilities?

yanni kyriacos · 15 Aug 2023 23:29 UTC
18 points
21 comments · 1 min read · EA link

A Reply to MacAskill on “If Anyone Builds It, Everyone Dies”

RobBensinger · 27 Sep 2025 23:03 UTC
9 points
7 comments · 17 min read · EA link

Reducing global AI competition through the Commerce Control List and Immigration reform: a dual-pronged approach

ben.smith · 3 Sep 2024 5:28 UTC
15 points
0 comments · 9 min read · EA link

Statement on AI Extinction—Signed by AGI Labs, Top Academics, and Many Other Notable Figures

Center for AI Safety · 30 May 2023 9:06 UTC
429 points
28 comments · 1 min read · EA link
(www.safe.ai)

Go Mobilize? Lessons from GM Protests for Pausing AI

Charlie Harrison · 24 Oct 2023 15:01 UTC
54 points
11 comments · 31 min read · EA link

Is principled mass-outreach possible, for AGI X-risk?

Nicholas Kross · 21 Jan 2024 17:45 UTC
12 points
2 comments · 3 min read · EA link