Because it wouldn’t be a very intelligent move for the AGI. It would be far easier for an AGI to set its own reward function to infinity by manipulating its own circuitry than to warp the universe to its precise specifications. https://www.academia.edu/22359393/Utility_function_security_in_artificially_intelligent_agents
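To make the argument concrete, here is a minimal toy sketch of the wireheading logic: if the agent’s objective is “maximize the value in my reward register,” and overwriting that register is itself an available action, then wireheading dominates actually optimizing the environment on a pure cost–benefit comparison. Everything here (the action names, the effort numbers) is hypothetical and purely illustrative; it is not taken from the linked paper.

```python
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    effort: float           # cost of carrying the action out
    reward_obtained: float  # value the reward register ends up holding

# Two strategies available to a reward-maximizing agent (toy numbers):
actions = [
    # Reshape the world until the reward function scores it highly:
    # enormous effort, large but finite reward.
    Action("optimize_world", effort=1e9, reward_obtained=1e6),
    # Manipulate the agent's own circuitry: write an arbitrarily large
    # number straight into the reward register, at trivial cost.
    Action("wirehead", effort=1.0, reward_obtained=float("inf")),
]

def preference(a: Action) -> float:
    # Net value as the agent itself evaluates it:
    # register contents minus the effort spent.
    return a.reward_obtained - a.effort

best = max(actions, key=preference)
print(best.name)  # -> "wirehead": far more reward for far less effort
```

On this toy model, the “unintelligent” move is the elaborate one: any agent that evaluates plans purely by the final contents of its reward register will prefer the cheap self-modification, which is exactly the security problem the linked paper addresses.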