What directions do you feel have been most successful with regards to AI safety progress over the past several years, and why?
What AI capability developments are the most alarming to you, and what can we do to address them?
What’s the single biggest mistake people excited about working in AI safety can make?
What’s one specific thing someone interested in working in AI safety can do over the near-term?
What’s changed since you published Life 3.0? What did your book get right, and what did it get wrong?