Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it's nearly as likely to be net-negative as net-positive given our great uncertainty.
Does this apply to both technical AI work and AI governance work? For each, what are the main ways these interventions are likely to backfire?