
Slowing down AI

Last edit: 25 Apr 2023 17:11 UTC by Dane Valerie

Slowing down AI development has been proposed as a policy intervention to reduce AGI risk: delaying the arrival of AGI would give society more time to prepare for it.

Related entries

AI race | AI takeoff

Debate series: should we push for a pause on the development of AI?

Ben_West🔸 · 8 Sep 2023 16:29 UTC
252 points
58 comments · 1 min read · EA link

Leverage points for a pause

Remmelt · 28 Aug 2024 9:21 UTC
6 points
0 comments · 1 min read · EA link

Let’s think about slowing down AI

Katja_Grace · 23 Dec 2022 19:56 UTC
334 points
9 comments · 1 min read · EA link

Request to AGI organizations: Share your views on pausing AI progress

Akash · 11 Apr 2023 17:30 UTC
85 points
1 comment · 1 min read · EA link

An AI crash is our best bet for restricting AI

Remmelt · 11 Oct 2024 2:12 UTC
20 points
1 comment · 1 min read · EA link

Unions for AI safety?

dEAsign · 24 Sep 2023 0:13 UTC
7 points
12 comments · 2 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 31 Oct 2023 5:46 UTC
14 points
1 comment · 2 min read · EA link
(www.campaignforaisafety.org)

Apply to CEEALAR to do AGI moratorium work

Greg_Colbourn · 26 Jul 2023 21:24 UTC
62 points
0 comments · 1 min read · EA link

The Case For Civil Disobedience For The AI Movement

Murali Thoppil · 24 Apr 2023 13:07 UTC
16 points
3 comments · 4 min read · EA link
(murali42e.substack.com)

Anthropic is being sued for copying books to train Claude

Remmelt · 31 Aug 2024 2:57 UTC
3 points
0 comments · 1 min read · EA link
(fingfx.thomsonreuters.com)

Corporate campaigns work: a key learning for AI Safety

Jamie_Harris · 17 Aug 2023 21:35 UTC
71 points
12 comments · 6 min read · EA link

Thoughts on yesterday’s UN Security Council meeting on AI

Greg_Colbourn · 19 Jul 2023 16:46 UTC
31 points
2 comments · 1 min read · EA link

Partial Transcript of Recent Senate Hearing Discussing AI X-Risk

Daniel_Eth · 27 Jul 2023 9:16 UTC
150 points
2 comments · 22 min read · EA link
(medium.com)

Some quotes from Tuesday’s Senate hearing on AI

Daniel_Eth · 17 May 2023 12:13 UTC
105 points
7 comments · 4 min read · EA link

P(doom|AGI) is high: why the default outcome of AGI is doom

Greg_Colbourn · 2 May 2023 10:40 UTC
13 points
28 comments · 3 min read · EA link

Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure

Otto · 8 May 2023 10:49 UTC
28 points
5 comments · 6 min read · EA link

An Update On The Campaign For AI Safety Dot Org

yanni kyriacos · 5 May 2023 0:19 UTC
26 points
4 comments · 1 min read · EA link

UN Secretary-General recognises existential threat from AI

Greg_Colbourn · 15 Jun 2023 17:03 UTC
58 points
1 comment · 1 min read · EA link

AGI rising: why we are in a new era of acute risk and increasing public awareness, and what to do now

Greg_Colbourn · 2 May 2023 10:17 UTC
68 points
35 comments · 13 min read · EA link

We are fighting a shared battle (a call for a different approach to AI Strategy)

Gideon Futerman · 16 Mar 2023 14:37 UTC
59 points
11 comments · 15 min read · EA link

Deconfusing Pauses: Long Term Moratorium vs Slowing AI

Gideon Futerman · 4 Aug 2024 11:32 UTC
17 points
3 comments · 5 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 16 Jun 2023 9:45 UTC
15 points
3 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Giving away copies of Uncontrollable by Darren McKee

Greg_Colbourn · 14 Dec 2023 17:00 UTC
39 points
2 comments · 1 min read · EA link

Beware of the new scaling paradigm

Johan de Kock · 19 Sep 2024 17:03 UTC
9 points
2 comments · 3 min read · EA link

Why I’m doing PauseAI

Joseph Miller · 30 Apr 2024 16:21 UTC
141 points
34 comments · 1 min read · EA link

Some reasons to start a project to stop harmful AI

Remmelt · 22 Aug 2024 16:23 UTC
5 points
0 comments · 1 min read · EA link

Why Stop AI is barricading OpenAI

Remmelt · 14 Oct 2024 7:12 UTC
−29 points
28 comments · 6 min read · EA link
(docs.google.com)

Against Aschenbrenner: How ‘Situational Awareness’ constructs a narrative that undermines safety and threatens humanity

Gideon Futerman · 15 Jul 2024 16:21 UTC
238 points
22 comments · 21 min read · EA link

Funding circle aimed at slowing down AI—looking for participants

Greg_Colbourn · 25 Jan 2024 23:58 UTC
92 points
2 comments · 2 min read · EA link

OpenAI defected, but we can take honest actions

Remmelt · 21 Oct 2024 8:41 UTC
19 points
1 comment · 2 min read · EA link

Ex-OpenAI researcher says OpenAI mass-violated copyright law

Remmelt · 24 Oct 2024 1:00 UTC
9 points
0 comments · 1 min read · EA link
(suchir.net)

Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk

Michaël Trazzi · 16 Sep 2022 18:00 UTC
48 points
6 comments · 3 min read · EA link
(theinsideview.ai)

Slowing down AI progress is an underexplored alignment strategy

Michael Huang · 13 Jul 2022 3:22 UTC
92 points
11 comments · 3 min read · EA link
(www.lesswrong.com)

Discussion Group: Slowing Down AI Progress

Group Organizer · 12 Jan 2023 4:52 UTC
4 points
0 comments · 1 min read · EA link

[Question] Slowing down AI progress?

Eleni_A · 26 Jul 2022 8:46 UTC
16 points
9 comments · 1 min read · EA link

Katja Grace: Let’s think about slowing down AI

peterhartree · 23 Dec 2022 0:57 UTC
84 points
6 comments · 2 min read · EA link
(worldspiritsockpuppet.substack.com)

Data Taxation: A Proposal for Slowing Down AGI Progress

Per Ivar Friborg · 11 Apr 2023 17:27 UTC
42 points
6 comments · 12 min read · EA link

EA Anywhere Discussion: The case for slowing down AI

Sasha Berezhnoi 🔸 · 31 Mar 2023 6:43 UTC
5 points
0 comments · 1 min read · EA link

FLI open letter: Pause giant AI experiments

Zach Stein-Perlman · 29 Mar 2023 4:04 UTC
220 points
38 comments · 1 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down by Eliezer Yudkowsky

jacquesthibs · 29 Mar 2023 23:30 UTC
211 points
75 comments · 3 min read · EA link
(time.com)

Navigating AI Risks (NAIR) #1: Slowing Down AI

simeon_c · 14 Apr 2023 14:35 UTC
12 points
1 comment · 1 min read · EA link

The costs of caution

Kelsey Piper · 1 May 2023 20:04 UTC
112 points
17 comments · 4 min read · EA link

Pausing AI Developments Isn’t Enough. We Need to Shut it All Down

EliezerYudkowsky · 9 Apr 2023 15:53 UTC
50 points
3 comments · 1 min read · EA link

“Slower tech development” can be about ordering, gradualness, or distance from now

MichaelA🔸 · 14 Nov 2021 20:58 UTC
47 points
3 comments · 4 min read · EA link

Cruxes on US lead for some domestic AI regulation

Zach Stein-Perlman · 10 Sep 2023 18:00 UTC
20 points
6 comments · 2 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 7 Aug 2023 6:09 UTC
32 points
2 comments · 2 min read · EA link
(www.campaignforaisafety.org)

The state of AI in different countries — an overview

Lizka · 14 Sep 2023 10:37 UTC
68 points
6 comments · 13 min read · EA link
(aisafetyfundamentals.com)

Protest against Meta’s irreversible proliferation (Sept 29, San Francisco)

Holly Elmore ⏸️ 🔸 · 19 Sep 2023 23:40 UTC
114 points
32 comments · 1 min read · EA link

The History, Epistemology and Strategy of Technological Restraint, and lessons for AI (short essay)

MMMaas · 10 Aug 2022 11:00 UTC
90 points
6 comments · 9 min read · EA link
(verfassungsblog.de)

Is it time for a pause?

Kelsey Piper · 6 Apr 2023 11:48 UTC
103 points
6 comments · 5 min read · EA link

Instead of technical research, more people should focus on buying time

Akash · 5 Nov 2022 20:43 UTC
107 points
31 comments · 1 min read · EA link

A note about differential technological development

So8res · 24 Jul 2022 23:41 UTC
58 points
8 comments · 6 min read · EA link

Update from Campaign for AI Safety

Nik Samoylov · 1 Jun 2023 10:46 UTC
22 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

A moral backlash against AI will probably slow down AGI development

Geoffrey Miller · 31 May 2023 21:31 UTC
141 points
22 comments · 14 min read · EA link

US public opinion of AI policy and risk

Jamie Elsey · 12 May 2023 13:22 UTC
111 points
7 comments · 15 min read · EA link

Updates from Campaign for AI Safety

Jolyn Khoo · 27 Sep 2023 2:44 UTC
16 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Fifteen Lawsuits against OpenAI

Remmelt · 9 Mar 2024 12:22 UTC
54 points
5 comments · 1 min read · EA link

Statement on AI Extinction—Signed by AGI Labs, Top Academics, and Many Other Notable Figures

Center for AI Safety · 30 May 2023 9:06 UTC
427 points
28 comments · 1 min read · EA link
(www.safe.ai)

[Crosspost] AI Regulation May Be More Important Than AI Alignment For Existential Safety

Otto · 24 Aug 2023 16:01 UTC
14 points
2 comments · 5 min read · EA link

Go Mobilize? Lessons from GM Protests for Pausing AI

Charlie Harrison · 24 Oct 2023 15:01 UTC
48 points
11 comments · 31 min read · EA link

Efficacy of AI Activism: Have We Ever Said No?

Charlie Harrison · 27 Oct 2023 16:52 UTC
78 points
25 comments · 20 min read · EA link

Lying is Cowardice, not Strategy

Connor Leahy · 25 Oct 2023 5:59 UTC
−5 points
15 comments · 5 min read · EA link
(cognition.cafe)

The International PauseAI Protest: Activism under uncertainty

Joseph Miller · 12 Oct 2023 17:36 UTC
129 points
3 comments · 4 min read · EA link

[Question] How bad would AI progress need to be for us to think general technological progress is also bad?

Jim Buhler · 6 Jul 2024 18:44 UTC
10 points
0 comments · 1 min read · EA link

[Question] Am I taking crazy pills? Why aren’t EAs advocating for a pause on AI capabilities?

yanni kyriacos · 15 Aug 2023 23:29 UTC
18 points
21 comments · 1 min read · EA link

Is principled mass-outreach possible, for AGI X-risk?

Nicholas / Heather Kross · 21 Jan 2024 17:45 UTC
12 points
2 comments · 1 min read · EA link

OPEC for a slow AGI takeoff

vyrax · 21 Apr 2023 10:53 UTC
4 points
0 comments · 3 min read · EA link

[Crosspost] An AI Pause Is Humanity’s Best Bet For Preventing Extinction (TIME)

Otto · 24 Jul 2023 10:18 UTC
36 points
3 comments · 7 min read · EA link
(time.com)

Updates from Campaign for AI Safety

Jolyn Khoo · 30 Aug 2023 5:36 UTC
7 points
0 comments · 2 min read · EA link
(www.campaignforaisafety.org)

Introducing StakeOut.AI

Harry Luk · 17 Feb 2024 0:21 UTC
52 points
6 comments · 9 min read · EA link

Reducing global AI competition through the Commerce Control List and Immigration reform: a dual-pronged approach

ben.smith · 3 Sep 2024 5:28 UTC
15 points
0 comments · 9 min read · EA link