I know this is an April Fools' joke, but EAs and AI safety people should do more thinking about how to value-align human organizations while still keeping them instrumentally effective (see e.g. @Scott Alexander's A Paradox of Ecclesiology and the social and intellectual movements tag).
Plenty of AI safety people have tried to do work in AI, with, let's say, a mixed track record:
Be too relaxed in organization and orthodoxy, too bottom-up in control, and you wind up starting the AI race in the first place, because the CEO you picked turned out to be a pathological liar and plenty of your new hires turned out to be more committed to him and to acceleration than to safety.
Be too strict in organization and orthodoxy, too top-down in control, and the sole AI safety work you manage to publish is a seven-page Word document with sign errors in the math, and the only thing you're known for is being linked to eight violent deaths.
… probably there should be a golden mean between the two. (EleutherAI seems to be a rare success story in this area.)