Executive summary: The post provides thoughts on AI safety policies requested from AI labs by the UK government. It argues the policies are inadequate but some labs like Anthropic and OpenAI are relatively better. It suggests alternative priorities like compute limits, risk assessments, and contingency planning.
Key points:
The UK government’s policy categories seem reasonable but miss key issues like independent risk assessments and contingency planning.
Current AI systems pose unacceptable risks; progress should halt until risks are addressed. Even so, the policies are useful in getting labs to acknowledge risks.
Anthropic and OpenAI’s policies seem best, taking risks more seriously. DeepMind’s is much worse. Meta’s is far worse.
Governments should also institute compute limits, monitor chips, halt chip progress, require risk assessments, and develop contingency plans.
Independent risk assessments from actuaries could help determine which labs can continue operating.
If risks appear unaddressable before wide availability, governments need a plan for that scenario now.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.