
Intelligence explosion

Last edit: Jul 27, 2022, 2:25 PM by Leo

An intelligence explosion (sometimes called a technological singularity, or singularity for short) is a hypothesized event in which a sufficiently advanced artificial intelligence rapidly attains superhuman intellectual ability through recursive self-improvement: each round of improvement makes the system better at improving itself, potentially producing a runaway feedback loop.

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.

Chalmers, David J. (2010) The singularity: A philosophical analysis, Journal of Consciousness Studies, vol. 17, pp. 7–65.

Pearce, David (2012) The biointelligence explosion: how recursively self-improving organic robots will modify their own source code and bootstrap our way to full-spectrum superintelligence, in Amnon H. Eden et al. (eds.) Singularity Hypotheses: A Scientific and Philosophical Assessment, Berlin: Springer, pp. 199–238.

Sandberg, Anders (2013) An overview of models of technological singularity, in Max More & Natasha Vita-More (eds.) The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future, Malden: Wiley, pp. 376–394.

Vinding, Magnus (2017) A contra AI FOOM reading list, Magnus Vinding’s Blog, December (updated June 2022).

Related entries

AI skepticism | AI takeoff | artificial intelligence | flourishing futures | superintelligence | transformative artificial intelligence

Why I’m Sceptical of Foom

𝕮𝖎𝖓𝖊𝖗𝖆 · Dec 8, 2022, 10:01 AM
22 points
7 comments · 1 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · Nov 24, 2022, 11:54 AM
74 points
32 comments · 22 min read · EA link

Comments on Ernest Davis’s comments on Bostrom’s Superintelligence

Giles · Jan 24, 2015, 4:40 AM
2 points
8 comments · 9 min read · EA link

The flaws that make today’s AI architecture unsafe and a new approach that could fix it

80000_Hours · Jun 22, 2020, 10:15 PM
3 points
0 comments · 86 min read · EA link
(80000hours.org)

Ngo and Yudkowsky on AI capability gains

richard_ngo · Nov 19, 2021, 1:54 AM
23 points
4 comments · 39 min read · EA link

Preparing for the Intelligence Explosion

finm · Mar 11, 2025, 3:38 PM
114 points
3 comments · 1 min read · EA link
(www.forethought.org)

Counterarguments to the basic AI risk case

Katja_Grace · Oct 14, 2022, 8:30 PM
284 points
23 comments · 34 min read · EA link

What a compute-centric framework says about AI takeoff speeds

Tom_Davidson · Jan 23, 2023, 4:09 AM
189 points
7 comments · 16 min read · EA link
(www.lesswrong.com)

Linkpost: Dwarkesh Patel interviewing Carl Shulman

Stefan_Schubert · Jun 14, 2023, 3:30 PM
110 points
5 comments · 1 min read · EA link
(podcastaddict.com)

AGI will be made of heterogeneous components, Transformer and Selective SSM blocks will be among them

Roman Leventov · Dec 27, 2023, 2:51 PM
5 points
0 comments · 1 min read · EA link

Concern About the Intelligence Divide Due to AI

Soe Lin · Aug 21, 2024, 9:53 AM
17 points
1 comment · 2 min read · EA link

The Singularity and Its Metaphysical Implications

Tomer_Goloboy · Mar 28, 2022, 12:18 AM
12 points
0 comments · 9 min read · EA link