Intelligence explosion

An intelligence explosion (sometimes called a technological singularity, or singularity for short) is a hypothesized event in which a sufficiently advanced artificial intelligence rapidly attains superhuman intellectual ability by a process of recursive self-improvement.
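
One way to make the hypothesis precise is with a simple growth model (see Sandberg's overview of singularity models under Further reading): if the rate of improvement itself increases with current capability, capability can diverge in finite time rather than merely grow exponentially. As a minimal illustrative sketch, not drawn from any particular source above, let $I(t)$ denote the system's capability, with $I_0 = I(0)$ and free parameters $k > 0$ and $\alpha$:

$$\frac{dI}{dt} = k\,I^{\alpha} \quad\Longrightarrow\quad I(t) = \Big(I_0^{\,1-\alpha} - k(\alpha - 1)\,t\Big)^{\frac{1}{1-\alpha}} \quad \text{for } \alpha > 1,$$

which diverges as $t \to t^{*} = I_0^{\,1-\alpha} / \big(k(\alpha - 1)\big)$. For $\alpha \le 1$ the same equation yields only exponential or slower growth, so on this toy model the "explosion" hinges on the returns to self-improvement being superlinear.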

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.

Chalmers, David J. (2010) The singularity: A philosophical analysis, Journal of Consciousness Studies, vol. 17, pp. 7–65.

Pearce, David (2012) The biointelligence explosion: how recursively self-improving organic robots will modify their own source code and bootstrap our way to full-spectrum superintelligence, in Amnon H. Eden et al. (eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment, Berlin: Springer, pp. 199–238.

Sandberg, Anders (2013) An overview of models of technological singularity, in Max More & Natasha Vita-More (eds.) The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Malden: Wiley, pp. 376–394.

Vinding, Magnus (2017) A contra AI FOOM reading list, Magnus Vinding’s Blog, December (updated June 2022).

Related entries

AI skepticism | AI takeoff | artificial intelligence | flourishing futures | superintelligence | transformative artificial intelligence

Two contrasting models of “intelligence” and future growth

Magnus Vinding · 24 Nov 2022 11:54 UTC · 74 points · 32 comments · 22 min read · EA link

Why I’m Sceptical of Foom

𝕮𝖎𝖓𝖊𝖗𝖆 · 8 Dec 2022 10:01 UTC · 22 points · 7 comments · 1 min read · EA link

What a compute-centric framework says about AI takeoff speeds

Tom_Davidson · 23 Jan 2023 4:09 UTC · 189 points · 7 comments · 16 min read · EA link (www.lesswrong.com)

Linkpost: Dwarkesh Patel interviewing Carl Shulman

Stefan_Schubert · 14 Jun 2023 15:30 UTC · 110 points · 5 comments · 1 min read · EA link (podcastaddict.com)

The flaws that make today’s AI architecture unsafe and a new approach that could fix it

80000_Hours · 22 Jun 2020 22:15 UTC · 3 points · 0 comments · 86 min read · EA link (80000hours.org)

Counterarguments to the basic AI risk case

Katja_Grace · 14 Oct 2022 20:30 UTC · 284 points · 23 comments · 34 min read · EA link

Comments on Ernest Davis’s comments on Bostrom’s Superintelligence

Giles · 24 Jan 2015 4:40 UTC · 2 points · 8 comments · 9 min read · EA link

Ngo and Yudkowsky on AI capability gains

richard_ngo · 19 Nov 2021 1:54 UTC · 23 points · 4 comments · 39 min read · EA link

Concern About the Intelligence Divide Due to AI

Soe Lin · 21 Aug 2024 9:53 UTC · 17 points · 1 comment · 2 min read · EA link

The Singularity and Its Metaphysical Implications

Tomer_Goloboy · 28 Mar 2022 0:18 UTC · 12 points · 0 comments · 9 min read · EA link

AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them

Roman Leventov · 27 Dec 2023 14:51 UTC · 5 points · 0 comments · 1 min read · EA link