Executive summary: OpenAI’s preparedness framework for AI safety makes valuable contributions, especially around communication, clarity, openness to feedback, and emergency response planning. But it could be strengthened by more focus on general intelligence, clearer safeguard requirements, adjusting autonomy thresholds, granting veto power to safety roles, and enhancing security.
Key points:
OpenAI communicated the framework well: its name appropriately raises concern, and its clarity signals the risks to policymakers.
Concrete eval examples, risk spectrums, and emergency plans are strengths.
More focus is needed on general intelligence safety levels, not just narrow capabilities.
Safeguard requirements for high-risk models should be specified.
Autonomy thresholds may be too high given deployment plans.
Grant veto power over models to the Safety Advisory Chair and the head of Preparedness.
Commit to security practices that protect models from theft.
Increase frequency of emergency response drills.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.