[Question] How independent is the research coming out of OpenAI’s preparedness team?
For instance, how independent is research such as “Building an early warning system for LLM-aided biological threat creation”?
Edit: By “how independent”, I mean something like “how truth-seeking is the preparedness team vs. how much is it confirming what’s in OpenAI’s monetary interest (to whatever extent these two aims come into tension)?”