
AI takeoff

AI takeoff is a hypothesized period of transition during which an advanced artificial intelligence acquires superhuman intellectual capacity.

Further reading

Barnett, Matthew (2020) Distinguishing definitions of takeoff, AI Alignment Forum, February 13.
A compendium of explicit definitions of ‘AI takeoff’.

Christiano, Paul (2018) Takeoff speeds, The Sideways View, February 24.

Related entries

artificial intelligence | decisive strategic advantage | superintelligence

What a compute-centric framework says about AI takeoff speeds

Tom_Davidson · 23 Jan 2023 4:09 UTC
189 points
7 comments · 16 min read · EA link
(www.lesswrong.com)

Evolution provides no evidence for the sharp left turn

Quintin Pope · 11 Apr 2023 18:48 UTC
43 points
2 comments · 1 min read · EA link

Exponential AI takeoff is a myth

Christoph Hartmann · 31 May 2023 11:47 UTC
40 points
11 comments · 9 min read · EA link

A compute-based framework for thinking about the future of AI

Matthew_Barnett · 31 May 2023 22:00 UTC
96 points
36 comments · 19 min read · EA link

“Slower tech development” can be about ordering, gradualness, or distance from now

MichaelA · 14 Nov 2021 20:58 UTC
47 points
3 comments · 4 min read · EA link

AI impacts and Paul Christiano on takeoff speeds

Crosspost · 2 Mar 2018 11:16 UTC
4 points
0 comments · 1 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · 24 Nov 2022 11:54 UTC
74 points
32 comments · 22 min read · EA link

Continuity Assumptions

Jan_Kulveit · 13 Jun 2022 21:36 UTC
44 points
4 comments · 4 min read · EA link
(www.alignmentforum.org)

Success without dignity: a nearcasting story of avoiding catastrophe by luck

Holden Karnofsky · 15 Mar 2023 20:17 UTC
106 points
3 comments · 1 min read · EA link

AI Could Defeat All Of Us Combined

Holden Karnofsky · 10 Jun 2022 23:25 UTC
143 points
14 comments · 17 min read · EA link

How we could stumble into AI catastrophe

Holden Karnofsky · 16 Jan 2023 14:52 UTC
78 points
0 comments · 31 min read · EA link
(www.cold-takes.com)

Heretical Thoughts on AI | Eli Dourado

𝕮𝖎𝖓𝖊𝖗𝖆 · 19 Jan 2023 16:11 UTC
138 points
15 comments · 1 min read · EA link

How quickly AI could transform the world (Tom Davidson on The 80,000 Hours Podcast)

80000_Hours · 8 May 2023 13:23 UTC
82 points
3 comments · 17 min read · EA link

Cyborg Periods: There will be multiple AI transitions

Jan_Kulveit · 22 Feb 2023 16:09 UTC
61 points
1 comment · 1 min read · EA link

Continuous doesn’t mean slow

Tom_Davidson · 10 May 2023 12:17 UTC
64 points
1 comment · 4 min read · EA link

Vignettes Workshop (AI Impacts)

kokotajlod · 15 Jun 2021 11:02 UTC
43 points
5 comments · 1 min read · EA link

Shulman and Yudkowsky on AI progress

CarlShulman · 4 Dec 2021 11:37 UTC
46 points
0 comments · 20 min read · EA link

AGI Takeoff dynamics—Intelligence vs Quantity explosion

EdoArad · 26 Jul 2023 9:20 UTC
14 points
0 comments · 2 min read · EA link
(github.com)

Future Matters #8: Bing Chat, AI labs on safety, and pausing Future Matters

Pablo · 21 Mar 2023 14:50 UTC
81 points
5 comments · 24 min read · EA link

What is autonomy, and how does it lead to greater risk from AI?

Davidmanheim · 1 Aug 2023 8:06 UTC
10 points
0 comments · 6 min read · EA link
(www.lesswrong.com)

Announcing Epoch: A research organization investigating the road to Transformative AI

Jaime Sevilla · 27 Jun 2022 13:39 UTC
183 points
11 comments · 2 min read · EA link
(epochai.org)

Yudkowsky and Christiano discuss “Takeoff Speeds”

EliezerYudkowsky · 22 Nov 2021 19:42 UTC
42 points
0 comments · 60 min read · EA link

Metaculus Predicts Weak AGI in 2 Years and AGI in 10

Chris Leong · 24 Mar 2023 19:43 UTC
27 points
12 comments · 1 min read · EA link

MIRI Conversations: Technology Forecasting & Gradualism (Distillation)

TheMcDouglas · 13 Jul 2022 10:45 UTC
27 points
9 comments · 19 min read · EA link

A Critique of AI Takeover Scenarios

Fods12 · 31 Aug 2022 13:49 UTC
53 points
4 comments · 12 min read · EA link

Epoch is hiring a Research Data Analyst

merilalama · 22 Nov 2022 17:34 UTC
21 points
0 comments · 4 min read · EA link
(careers.rethinkpriorities.org)

The Windfall Clause has a remedies problem

John Bridge · 23 May 2022 10:31 UTC
40 points
0 comments · 20 min read · EA link

AI acceleration from a safety perspective: Trade-offs and considerations

mariushobbhahn · 19 Jan 2022 9:44 UTC
12 points
1 comment · 7 min read · EA link

What role should evolutionary analogies play in understanding AI takeoff speeds?

anson · 11 Dec 2021 1:16 UTC
12 points
0 comments · 42 min read · EA link

Is anyone else also getting more worried about hard takeoff AGI scenarios?

JonCefalu · 9 Jan 2023 6:04 UTC
19 points
11 comments · 3 min read · EA link

Why I think it’s important to work on AI forecasting

Matthew_Barnett · 27 Feb 2023 21:24 UTC
179 points
10 comments · 10 min read · EA link

Asterisk Magazine Issue 03: AI

Alejandro Ortega · 24 Jul 2023 15:53 UTC
34 points
3 comments · 1 min read · EA link
(asteriskmag.com)

We don’t understand what happened with culture enough

Jan_Kulveit · 9 Oct 2023 14:56 UTC
22 points
2 comments · 6 min read · EA link

AGI Battle Royale: Why “slow takeover” scenarios devolve into a chaotic multi-AGI fight to the death

titotal · 22 Sep 2022 15:00 UTC
42 points
9 comments · 15 min read · EA link

Un-unpluggability—can’t we just unplug it?

Oliver Sourbut · 15 May 2023 13:23 UTC
15 points
0 comments · 1 min read · EA link
(www.oliversourbut.net)

Power laws in Speedrunning and Machine Learning

Jaime Sevilla · 24 Apr 2023 10:06 UTC
48 points
0 comments · 1 min read · EA link

Unveiling the American Public Opinion on AI Moratorium and Government Intervention: The Impact of Media Exposure

Otto · 8 May 2023 10:49 UTC
28 points
5 comments · 6 min read · EA link

EA is underestimating intelligence agencies and this is dangerous

trevor1 · 26 Aug 2023 16:52 UTC
28 points
4 comments · 10 min read · EA link