A steelmanned version of the best longtermist argument(s) against AI safety as the top priority cause area.
Thanks for the suggestion. For reference, readers interested in this topic can check the posts on AI risk skepticism.