A 6-line argument for AGI risk
(1) Sufficient intelligence has capabilities that are ultimately limited only by physics and computability
(2) An AGI could be sufficiently intelligent that its capabilities are limited only by physics and computability, but humans cannot be
(3) An AGI will come into existence
(4) If the AGI's goals aren't the same as humans', human goals will be met only for instrumental reasons, while the AGI's goals will be met
(5) Meeting human goals won’t be instrumentally useful in the long run for an unaligned AGI
(6) It is more morally valuable for human goals to be met than an AGI's goals
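
Read as a chain, premises (1) and (2) seem intended to establish that such an AGI would be strictly more capable than humans, while (3)–(6) carry the inference from its existence to a morally worse outcome. Below is a minimal propositional sketch of the (3)–(6) chain in Lean; all proposition names and the explicit misalignment hypothesis are illustrative glosses I've added, not terms from the original post:

```lean
-- Minimal propositional sketch of premises (3)–(6); all identifiers are
-- illustrative assumptions, not terms from the original post.
theorem agi_risk
    (AGIExists GoalsMisaligned HumanGoalsOnlyInstrumental
      NoLongRunInstrumentalUse HumanGoalsUnmet MorallyWorseOutcome : Prop)
    -- (3) an AGI will come into existence
    (h3 : AGIExists)
    -- assumed antecedent of (4): the AGI's goals differ from human goals
    (hMis : GoalsMisaligned)
    -- (4) an existing, misaligned AGI meets human goals only for instrumental reasons
    (h4 : AGIExists → GoalsMisaligned → HumanGoalsOnlyInstrumental)
    -- (5) those instrumental reasons don't persist in the long run,
    --     so human goals eventually go unmet
    (h5 : HumanGoalsOnlyInstrumental → NoLongRunInstrumentalUse → HumanGoalsUnmet)
    (hNoUse : NoLongRunInstrumentalUse)
    -- (6) human goals going unmet while the AGI's are met is the morally worse outcome
    (h6 : HumanGoalsUnmet → MorallyWorseOutcome) :
    MorallyWorseOutcome :=
  h6 (h5 (h4 h3 hMis) hNoUse)
```

One thing the sketch makes explicit is that the conclusion needs the antecedent of (4) as a standing assumption: the AGI's goals must in fact be misaligned with human goals, or the chain never starts.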