AI risk

An AI risk is a catastrophic or existential risk arising from the creation of advanced artificial intelligence (AI).

Developments in AI have the potential to enable people around the world to flourish in hitherto unimagined ways. Such developments might also give humanity tools to address other sources of risk.

Despite this, AI also poses risks of its own. AI systems sometimes behave in ways that surprise their designers. At present, such systems are usually quite narrow in their capabilities: excellent at playing Go, for example, or at minimizing power consumption in a server facility. If people designed a machine intelligence that was a sufficiently good general reasoner, or even a better general reasoner than humans, it might become difficult for human agents to interfere with its functioning. If it then behaved in ways that did not reflect human values, it could use its intellectual superiority to gain a decisive strategic advantage, and if its behaviour were for some reason incompatible with human well-being, it could pose an existential risk.

Note that this risk does not depend on the machine intelligence gaining consciousness or bearing any ill will towards humanity.

Bibliography

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
Offers a detailed analysis of risks posed by AI.

Dewey, Daniel (2015) Three areas of research on the superintelligence control problem, Global Priorities Project, October 20.
Provides an overview of and suggested readings on AI risk.

Karnofsky, Holden (2016) Potential risks from advanced artificial intelligence: the philanthropic opportunity, Open Philanthropy, May 6.
Explains why the Open Philanthropy Project regards risks from AI as an area worth exploring.

Krakovna, Victoria (2017) Introductory resources on AI safety research, Victoria Krakovna’s Blog, October 19.
A list of readings on AI safety.

Related entries

AI safety

Persuasion Tools: AI takeover without AGI or agency?

kokotajlod, 20 Nov 2020 16:56 UTC · 11 points · 5 comments · 10 min read · EA link

Draft report on existential risk from power-seeking AI

Joe_Carlsmith, 28 Apr 2021 21:41 UTC · 76 points · 33 comments · 1 min read · EA link

Some thoughts on risks from narrow, non-agentic AI

richard_ngo, 19 Jan 2021 0:07 UTC · 34 points · 2 comments · 8 min read · EA link

Mitigating x-risk through modularity

Toby Newberry, 17 Dec 2020 19:54 UTC · 83 points · 4 comments · 14 min read · EA link

AI Safety Career Bottlenecks Survey Responses

Linda Linsefors, 28 May 2021 10:41 UTC · 19 points · 1 comment · 5 min read · EA link

[Question] How much will pre-transformative AI speed up R&D?

Ben_Snodin, 31 May 2021 20:20 UTC · 23 points · 0 comments · 1 min read · EA link