Executive summary: The author proposes creating a U.S. government body called NAIRA to regulate AI development, promote safety research, and make AI safety a competitive priority between labs in order to mitigate existential risks from advanced AI systems.
Key points:
- NAIRA would consist of three sub-bodies: a Government Machinery for administration and enforcement, a RISE committee with stakeholders from government, AI labs, and the public to vote on AI system approvals, and a CARE committee to evaluate AI systems for safety compliance.
- AI labs would need CARE and RISE approval before releasing new AI models. RISE would allow competing labs to indirectly scrutinize proposed models, incentivizing higher safety standards.
- NAIRA would assign safety scores to AI models, enforce penalties for non-compliance, monitor for adverse effects, and provide grants for AI alignment research.
- The voting structure of RISE aims to introduce deliberate inefficiency into the overall AI development pipeline, slowing releases while still allowing progress with proper safety checks.
- Labs would be incentivized to participate in RISE to access government resources, keep tabs on competitors, influence policy, gain PR benefits, and receive tax credits for transparency.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.