Now, Anthropic, OpenAI, Google DeepMind, and xAI say their most powerful models might have dangerous biology capabilities and thus could substantially boost extremists—but not states—in creating bioweapons.
I think the “not states” part of this is incorrect in the case of OpenAI, whose Deep Research system card said: “Our evaluations found that deep research can help experts with the operational planning of reproducing a known biological threat, which meets our medium risk threshold.”
I haven’t read all of the relevant stuff in a long time, but my impression is that Bio/Chem High is about uplifting novices and Critical is about uplifting experts. See the PF below. Also note that OpenAI said Deep Research was safe; it’s ChatGPT Agent and GPT-5 that it said required safeguards.
That’s the new PF. The old (December 2023) version defined a medium risk threshold, which Deep Research surpassed.
https://cdn.openai.com/openai-preparedness-framework-beta.pdf