
Malignant AI failure mode

Last edit: 8 Jan 2022 12:49 UTC by Pablo

An AI failure mode is a way in which a project to develop machine superintelligence may fail. Malignant AI failure modes are AI failure modes that result in an existential catastrophe.

Nick Bostrom categorizes malignant AI failure modes into three basic types: perverse instantiation, in which an AI satisfies its goals in ways contrary to the intentions of those who programmed it; infrastructure profusion, in which large parts of the accessible universe are transformed into infrastructure in the service of some goal, impeding the realization of humanity's long-term potential; and mind crime, the mistreatment of morally relevant computational processes.

Further reading

Bostrom, Nick (2014) Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press, pp. 119–126.

Related entries

existential risk | infrastructure profusion | mind crime | perverse instantiation | superintelligence

How likely are malign priors over objectives? [aborted WIP]

David Johnston · 11 Nov 2022 6:03 UTC · 6 points · 0 comments · 1 min read

Against GDP as a metric for timelines and takeoff speeds

kokotajlod · 29 Dec 2020 17:50 UTC · 47 points · 6 comments · 14 min read

AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk—Request for Participation [Linkpost]

Kiliank · 9 May 2022 19:53 UTC · 17 points · 2 comments · 8 min read