Posit: Most AI safety people should work on alignment/safety challenges for AI tools that already have users (Stable Diffusion, GPT)
It seems that if we can’t make the basic versions of these tools well aligned with us, we won’t have much luck with future, more advanced versions. Therefore, most AI safety people should work on the alignment and safety challenges of AI tools that currently have users (image generators, GPT, etc.).

Agree? Disagree?