Superintelligence

Last edit: Jul 13, 2022, 11:59 PM by Pablo

A superintelligence is a cognitive system whose intellectual performance across all relevant domains vastly exceeds that of any human. Forms of superintelligence include quality superintelligence, speed superintelligence, and collective superintelligence.

If a superintelligence comes to exist, it could conceivably be either a machine (created through substantial progress in artificial intelligence) or a biological entity (created through genetic engineering or other human modification).

Since intelligence is the distinctive trait that has enabled humans to develop a civilization and become the dominant species on Earth, the development of much smarter agents than us would arguably be the most significant event in human history.

While it is difficult to predict, or even to conceive, what a future in which such agents exist would look like, several philosophers and computer scientists have recently argued that the arrival of superintelligence, particularly machine superintelligence, could pose an existential risk. On the other hand, if these risks are avoided, a superintelligence could be greatly beneficial, and might enable many of the world’s problems to be solved.

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.

Related entries

artificial intelligence | collective superintelligence | quality superintelligence | speed superintelligence

What can superintelligent ANI tell us about superintelligent AGI?

Ted Sanders · Jun 12, 2023, 6:32 AM
81 points
20 comments · 5 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · Nov 24, 2022, 11:54 AM
74 points
32 comments · 22 min read · EA link

[Question] The positive case for a focus on achieving safe AI?

vipulnaik · Jun 25, 2021, 4:01 AM
41 points
1 comment · 1 min read · EA link

[Question] Will the vast majority of technological progress happen in the longterm future?

Vasco Grilo🔸 · Jul 8, 2023, 8:40 AM
8 points
0 comments · 2 min read · EA link

[Question] Are we confident that superintelligent artificial intelligence disempowering humans would be bad?

Vasco Grilo🔸 · Jun 10, 2023, 9:24 AM
24 points
27 comments · 1 min read · EA link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · May 26, 2023, 3:57 PM
59 points
0 comments · 18 min read · EA link

Announcing Superintelligence Imagined: A creative contest on the risks of superintelligence

TaylorJns · Jun 12, 2024, 3:20 PM
17 points
0 comments · 1 min read · EA link

The necessity of “Guardian AI” and two conditions for its achievement

Proica · May 28, 2024, 11:42 AM
1 point
1 comment · 15 min read · EA link

Geoffrey Hinton on the Past, Present, and Future of AI

Stephen McAleese · Oct 12, 2024, 4:41 PM
5 points
1 comment · 1 min read · EA link

Altman on the board, AGI, and superintelligence

OscarD🔸 · Jan 6, 2025, 2:37 PM
20 points
1 comment · 1 min read · EA link
(blog.samaltman.com)

Op­tion control

Joe_Carlsmith · Nov 4, 2024, 5:54 PM
11 points
0 comments · 1 min read · EA link

What are the differences between AGI, transformative AI, and superintelligence?

Vishakha Agrawal · Jan 23, 2025, 10:11 AM
12 points
0 comments · 3 min read · EA link
(aisafety.info)

[Linkpost] The Problem With The Current State of AGI Definitions

Yitz · May 29, 2022, 5:01 PM
7 points
0 comments · 4 min read · EA link

Tetherware #2: What every human should know about our most likely AI future

Jáchym Fibír · Feb 28, 2025, 11:25 AM
3 points
0 comments · 11 min read · EA link
(tetherware.substack.com)

The Case for Superintelligence Safety As A Cause: A Non-Technical Summary

HunterJay · May 21, 2019, 5:17 AM
12 points
9 comments · 6 min read · EA link

[Discussion] How Broad is the Human Cognitive Spectrum?

𝕮𝖎𝖓𝖊𝖗𝖆 · Jan 7, 2023, 12:59 AM
16 points
1 comment · 1 min read · EA link

OpenAI is starting a new “Superintelligence alignment” team and they’re hiring

Alejandro Ortega · Jul 5, 2023, 6:27 PM
100 points
16 comments · 1 min read · EA link
(openai.com)

OpenAI’s massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast)

80000_Hours · Aug 8, 2023, 6:00 PM
32 points
1 comment · 19 min read · EA link
(80000hours.org)