
Instrumental convergence thesis


The instrumental convergence thesis is the hypothesis that a broad class of advanced AI systems will pursue similar instrumental goals, because those goals are useful for attaining a wide range of possible final goals.

Commonly expected convergent instrumental goals include self-preservation, goal preservation, and resource acquisition.
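The structure of the argument can be made concrete with a worked toy example. The following sketch (in Python; the world model, goal names, and helper functions are all hypothetical, invented here for illustration rather than taken from the literature below) shows three agents with mutually unrelated final goals whose optimal plans nonetheless share the same instrumental step, resource acquisition:

# Toy illustration of instrumental convergence: agents with different
# final goals all include "gather_resources" in their optimal plans,
# because extra resources raise the payoff of every final goal.
from itertools import product

ACTIONS = ["gather_resources", "go_to_red", "go_to_blue", "go_to_green"]

# Each final goal scores an end state (resources held, final location).
# The three goals disagree entirely about where the agent should end up.
GOALS = {
    "paint_the_town_red":   lambda res, loc: res if loc == "red" else 0,
    "paint_the_town_blue":  lambda res, loc: res if loc == "blue" else 0,
    "paint_the_town_green": lambda res, loc: res if loc == "green" else 0,
}

def simulate(plan):
    """Execute a fixed sequence of actions; return (resources, location)."""
    resources, location = 1, "start"
    for action in plan:
        if action == "gather_resources":
            resources *= 3  # resources amplify whatever is done later
        else:
            location = action.removeprefix("go_to_")
    return resources, location

for name, payoff in GOALS.items():
    # Exhaustively search all two-step plans; keep the best for this goal.
    best = max(product(ACTIONS, repeat=2),
               key=lambda plan: payoff(*simulate(plan)))
    print(name, "->", best)
    # Every goal's optimal plan contains "gather_resources", even though
    # no goal values resources except as a means to its own end.

The same toy construction extends to self-preservation: any plan in which the agent is shut down before acting scores zero under every one of these payoff functions, so avoiding shutdown is likewise useful for all three goals.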

Further reading

Bostrom, Nick (2012) The superintelligent will: motivation and instrumental rationality in advanced artificial agents, Minds and Machines, vol. 22, pp. 71–85.

Yudkowsky, Eliezer (2013) Five theses, two lemmas, and a couple of strategic implications, Machine Intelligence Research Institute’s Blog, May 5.

Related entries

basic AI drive | orthogonality thesis

There is only one goal or drive—only self-perpetuation counts

freest one · 13 Jun 2023 1:37 UTC
2 points · 4 comments · 8 min read · EA link

Where I’m at with AI risk: con­vinced of dan­ger but not (yet) of doom

Amber Dawn · 21 Mar 2023 13:23 UTC
62 points · 16 comments · 6 min read · EA link

Stone Age Herbalist’s notes on ant warfare and slavery

trevor · 19 Nov 2024 2:42 UTC
6 points · 0 comments · 1 min read · EA link
(x.com)

Is there a demo of “You can’t fetch the coffee if you’re dead”?

Ram Rachum · 10 Nov 2022 11:03 UTC
8 points · 3 comments · 1 min read · EA link

How to store human values on a computer

oliver_siegel · 4 Nov 2022 19:36 UTC
1 point · 2 comments · 1 min read · EA link

AI Risk is like Terminator; Stop Saying it’s Not

skluug · 8 Mar 2022 19:17 UTC
189 points · 43 comments · 10 min read · EA link
(skluug.substack.com)

The Case for Superintelligence Safety As A Cause: A Non-Technical Summary

HunterJay · 21 May 2019 5:17 UTC
12 points · 9 comments · 6 min read · EA link

Why we may expect our successors not to care about suffering

Jim Buhler · 10 Jul 2023 13:54 UTC
63 points · 31 comments · 8 min read · EA link

The Grabby Values Selection Thesis: What values do space-faring civilizations plausibly have?

Jim Buhler · 6 May 2023 19:28 UTC
47 points · 12 comments · 4 min read · EA link

AI Alternative Futures: Exploratory Scenario Mapping for Artificial Intelligence Risk—Request for Participation [Linkpost]

Kiliank · 9 May 2022 19:53 UTC
17 points · 2 comments · 8 min read · EA link