There seem to be two different conceptual models of AI risk.
The first is the model in Joe's report “Existential risk from power-seeking AI”, in which he lays out a number of conditions which, if they all hold, will lead to AI takeover.
The second is a model (which stems from Yudkowsky and Bostrom, and more recently from Michael Cohen’s work https://www.lesswrong.com/posts/XtBJTFszs8oP3vXic/?commentId=yqm7fHaf2qmhCRiNA ) on which we should expect takeover by malign AGI by default, unless certain things happen to prevent it.
I personally think the second model is much more reasonable. Do you have any rebuttal?
See also Nate Soares arguing against Joe’s conjunctive breakdown of risk here, and my own argument here.