
Superintelligence


A superintelligence is a cognitive system whose intellectual performance across all relevant domains vastly exceeds that of any human. Forms of superintelligence include quality superintelligence, speed superintelligence, and collective superintelligence.

If a superintelligence comes to exist, it could conceivably be either a machine (created through substantial progress in artificial intelligence) or a biological entity (created through genetic engineering or other forms of human modification).

Since intelligence is the distinctive trait that has enabled humans to develop civilization and become the dominant species on Earth, the development of agents far smarter than ourselves would arguably be the most significant event in human history.

While it is difficult to predict, or even to conceive of, what a future containing such agents would look like, several philosophers and computer scientists have argued that the arrival of superintelligence, particularly machine superintelligence, could pose an existential risk. On the other hand, if these risks are avoided, a superintelligence could be enormously beneficial, and might enable many of the world’s problems to be solved.

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.

Related entries

artificial intelligence | collective superintelligence | quality superintelligence | speed superintelligence

What can superintelligent ANI tell us about superintelligent AGI?

Ted Sanders · 12 Jun 2023 6:32 UTC
81 points
20 comments · 5 min read · EA link

Two contrasting models of “intelligence” and future growth

Magnus Vinding · 24 Nov 2022 11:54 UTC
74 points
32 comments · 22 min read · EA link

[Question] The positive case for a focus on achieving safe AI?

vipulnaik · 25 Jun 2021 4:01 UTC
41 points
1 comment · 1 min read · EA link

[Question] Will the vast majority of technological progress happen in the longterm future?

Vasco Grilo🔸 · 8 Jul 2023 8:40 UTC
8 points
0 comments · 2 min read · EA link

[Question] Are we confident that superintelligent artificial intelligence disempowering humans would be bad?

Vasco Grilo🔸 · 10 Jun 2023 9:24 UTC
24 points
27 comments · 1 min read · EA link

Bandgaps, Brains, and Bioweapons: The limitations of computational science and what it means for AGI

titotal · 26 May 2023 15:57 UTC
59 points
0 comments · 18 min read · EA link

Announcing Superintelligence Imagined: A creative contest on the risks of superintelligence

TaylorJns · 12 Jun 2024 15:20 UTC
17 points
0 comments · 1 min read · EA link

The necessity of “Guardian AI” and two conditions for its achievement

Proica · 28 May 2024 11:42 UTC
1 point
1 comment · 15 min read · EA link

Geoffrey Hinton on the Past, Present, and Future of AI

Stephen McAleese · 12 Oct 2024 16:41 UTC
5 points
1 comment · 1 min read · EA link

[Linkpost] The Problem With The Current State of AGI Definitions

Yitz · 29 May 2022 17:01 UTC
7 points
0 comments · 4 min read · EA link

Option control

Joe_Carlsmith · 4 Nov 2024 17:54 UTC
11 points
0 comments · 1 min read · EA link

The Case for Superintelligence Safety As A Cause: A Non-Technical Summary

HunterJay · 21 May 2019 5:17 UTC
12 points
9 comments · 6 min read · EA link

[Discussion] How Broad is the Human Cognitive Spectrum?

𝕮𝖎𝖓𝖊𝖗𝖆 · 7 Jan 2023 0:59 UTC
16 points
1 comment · 1 min read · EA link

OpenAI is starting a new “Superintelligence alignment” team and they’re hiring

Alejandro Ortega · 5 Jul 2023 18:27 UTC
100 points
16 comments · 1 min read · EA link
(openai.com)

OpenAI’s massive push to make superintelligence safe in 4 years or less (Jan Leike on the 80,000 Hours Podcast)

80000_Hours · 8 Aug 2023 18:00 UTC
32 points
1 comment · 19 min read · EA link
(80000hours.org)