
AI risk


An AI risk is a catastrophic or existential risk arising from the creation of advanced artificial intelligence (AI).

Developments in AI have the potential to enable people around the world to flourish in hitherto unimagined ways. Such developments might also give humanity tools to address other sources of risk.

Despite this, AI also poses risks of its own. AI systems sometimes behave in ways that surprise the people who build them. At present, such systems are usually quite narrow in their capabilities: excellent at Go, for example, or at minimizing power consumption in a server facility. If people designed a machine intelligence that was a sufficiently good general reasoner, or even better at general reasoning than humans, it might become difficult for human agents to interfere with its functioning. Such a machine intelligence might use its intellectual superiority to develop a decisive strategic advantage, and if its behaviour did not reflect human values, or was for some other reason incompatible with human well-being, it could pose an existential risk.

Note that this risk does not depend on the machine intelligence gaining consciousness or having any ill will towards humanity.

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
Offers a detailed analysis of risks posed by AI.

Christiano, Paul (2019) What failure looks like, LessWrong, March 17.

Dai, Wei & Daniel Kokotajlo (2019) The main sources of AI risk?, AI Alignment Forum, March 21.
An attempt to list all the significant sources of AI risk.

Dewey, Daniel (2015) Three areas of research on the superintelligence control problem, Global Priorities Project, October 20.
Provides an overview and suggested reading in AI risk.

Karnofsky, Holden (2016) Potential risks from advanced artificial intelligence: the philanthropic opportunity, Open Philanthropy, May 6.
Explains why the Open Philanthropy Project regards risks from AI as an area worth exploring.

Related entries

AI safety

Persuasion Tools: AI takeover without AGI or agency?

kokotajlod · 20 Nov 2020 16:56 UTC · 12 points · 5 comments · 10 min read · EA link

Draft report on existential risk from power-seeking AI

Joe_Carlsmith · 28 Apr 2021 21:41 UTC · 76 points · 33 comments · 1 min read · EA link

Some thoughts on risks from narrow, non-agentic AI

richard_ngo · 19 Jan 2021 0:07 UTC · 35 points · 2 comments · 8 min read · EA link

Mitigating x-risk through modularity

Toby Newberry · 17 Dec 2020 19:54 UTC · 91 points · 6 comments · 14 min read · EA link

Disentangling arguments for the importance of AI safety

richard_ngo · 23 Jan 2019 14:58 UTC · 59 points · 14 comments · 8 min read · EA link

NIST AI Risk Management Framework request for information (RFI)

Aryeh Englander · 31 Aug 2021 22:24 UTC · 7 points · 0 comments · 2 min read · EA link

[Link post] How plausible are AI Takeover scenarios?

SammyDMartin · 27 Sep 2021 13:03 UTC · 26 points · 0 comments · 1 min read · EA link

General vs specific arguments for the longtermist importance of shaping AI development

SamClarke · 15 Oct 2021 14:43 UTC · 41 points · 5 comments · 2 min read · EA link

[Question] Can human extinction due to AI be justified as good?

acylhalide · 17 Oct 2021 14:08 UTC · 6 points · 19 comments · 1 min read · EA link

Reviews of “Is power-seeking AI an existential risk?”

Joe_Carlsmith · 16 Dec 2021 20:50 UTC · 43 points · 1 comment · 1 min read · EA link

AI Safety Career Bottlenecks Survey Responses

Linda Linsefors · 28 May 2021 10:41 UTC · 33 points · 1 comment · 5 min read · EA link

[Question] How much will pre-transformative AI speed up R&D?

Ben_Snodin · 31 May 2021 20:20 UTC · 23 points · 0 comments · 1 min read · EA link

Potential Risks from Advanced Artificial Intelligence: The Philanthropic Opportunity

Holden Karnofsky · 6 May 2016 12:55 UTC · 2 points · 0 comments · 23 min read · EA link (www.openphilanthropy.org)

Chris Olah on what the hell is going on inside neural networks

80000_Hours · 4 Aug 2021 15:13 UTC · 5 points · 0 comments · 133 min read · EA link

List of AI safety courses and resources

Daniel del Castillo · 6 Sep 2021 14:26 UTC · 47 points · 3 comments · 1 min read · EA link

The problem of artificial suffering

Martin Trouilloud · 24 Sep 2021 14:43 UTC · 40 points · 3 comments · 9 min read · EA link

Seeking social science students / collaborators interested in AI existential risks

Vael Gates · 24 Sep 2021 21:56 UTC · 54 points · 7 comments · 3 min read · EA link

[Creative Writing Contest] The Puppy Problem

Louis · 13 Oct 2021 14:01 UTC · 11 points · 0 comments · 7 min read · EA link

Clarifications about structural risk from AI

SamClarke · 18 Jan 2022 12:57 UTC · 25 points · 0 comments · 5 min read · EA link

AI acceleration from a safety perspective: Trade-offs and considerations

mariushobbhahn · 19 Jan 2022 9:44 UTC · 12 points · 1 comment · 7 min read · EA link

CSER is hiring for a senior research associate on longterm AI risk and governance

SamClarke · 24 Jan 2022 13:24 UTC · 9 points · 3 comments · 1 min read · EA link