Intelligence explosion

An intelligence explosion (sometimes called a technological singularity, or singularity for short) is a hypothesized event in which a sufficiently advanced artificial intelligence rapidly attains superhuman intellectual ability by a process of recursive self-improvement.
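
As a toy illustration of why "explosion" (rather than merely rapid growth) is the operative word, consider a minimal feedback model in the spirit of the growth models surveyed by Sandberg (2013, listed under Further reading below): if capability I raises its own growth rate as dI/dt = k·I^a, the trajectory diverges in finite time whenever a > 1, but grows only exponentially when a = 1. The Python sketch below is a hypothetical illustration, not a model endorsed by this entry; the constants k, a, dt and the "superhuman" threshold are arbitrary assumptions, not estimates.

    # Toy model: Euler-integrate dI/dt = k * I**a and report when capability I
    # first exceeds an (arbitrary) "superhuman" threshold. For a > 1 the exact
    # solution blows up in finite time; for a <= 1 it grows at most exponentially.
    def time_to_threshold(capability=1.0, k=0.1, a=1.5, dt=0.01,
                          threshold=1e6, horizon=200.0):
        t = 0.0
        while t < horizon:
            capability += k * capability**a * dt  # feedback: smarter -> faster gains
            t += dt
            if capability > threshold:
                return t  # crossed the threshold within the horizon
        return None  # no "explosion" before the horizon

    print(time_to_threshold(a=1.5))  # a > 1: finite-time blow-up, crosses near t = 20
    print(time_to_threshold(a=1.0))  # a = 1: plain exponential, crosses near t = 138

With a = 1 the crossing time is ln(10^6)/k ≈ 138, and raising the threshold pushes it out proportionally; with a = 1.5 the model reaches any threshold before its finite blow-up time 1/(k·(a−1)) = 20. That qualitative difference, not the particular numbers, is what the term "intelligence explosion" points at.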

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.

Chalmers, David J. (2010) The singularity: A philosophical analysis, Journal of Consciousness Studies, vol. 17, pp. 7–65.

Pearce, David (2012) The biointelligence explosion: how recursively self-improving organic robots will modify their own source code and bootstrap our way to full-spectrum superintelligence, in Amnon H. Eden et al. (eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment, Berlin: Springer, pp. 199–238.

Sandberg, Anders (2013) An overview of models of technological singularity, in Max More & Natasha Vita-More (eds.) The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Malden: Wiley, pp. 376–394.

Vinding, Magnus (2017) A contra AI FOOM reading list, Magnus Vinding’s Blog, December (updated June 2022).

Related entries

AI skepticism | AI takeoff | artificial intelligence | flourishing futures | superintelligence | transformative artificial intelligence

Why I’m Sceptical of Foom

𝕮𝖎𝖓𝖊𝖗𝖆 · 8 Dec 2022 10:01 UTC
21 points
7 comments · 1 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · 24 Nov 2022 11:54 UTC
74 points
32 comments · 22 min read · EA link

The flaws that make today’s AI architecture unsafe and a new approach that could fix it

80000_Hours · 22 Jun 2020 22:15 UTC
3 points
0 comments · 87 min read · EA link
(80000hours.org)

Ngo and Yudkowsky on AI capability gains

richard_ngo · 19 Nov 2021 1:54 UTC
23 points
4 comments · 39 min read · EA link

Counterarguments to the basic AI risk case

Katja_Grace · 14 Oct 2022 20:30 UTC
280 points
23 comments · 34 min read · EA link

Comments on Ernest Davis’s comments on Bostrom’s Superintelligence

Giles · 24 Jan 2015 4:40 UTC
2 points
8 comments · 9 min read · EA link

What a compute-centric framework says about AI takeoff speeds

Tom_Davidson · 23 Jan 2023 4:09 UTC
189 points
6 comments · 16 min read · EA link
(www.lesswrong.com)

Linkpost: Dwarkesh Patel interviewing Carl Shulman

Stefan_Schubert · 14 Jun 2023 15:30 UTC
106 points
5 comments · 1 min read · EA link
(podcastaddict.com)

Risks from GPT-4 Byproduct of Recursively Optimizing AIs

ben hayum · 6 Apr 2023 5:52 UTC
87 points
5 comments · 10 min read · EA link
(www.lesswrong.com)

AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them

Roman Leventov · 27 Dec 2023 14:51 UTC
5 points
0 comments · 1 min read · EA link

The Singularity and Its Metaphysical Implications

Tomer_Goloboy · 28 Mar 2022 0:18 UTC
12 points
0 comments · 9 min read · EA link