Executive summary: The author argues that recent events like Biden’s executive order on AI indicate society will likely regulate AI safety seriously, contrary to past assumptions. This has implications for which problems require special attention.
Key points:
Past narratives often assumed society would ignore AI risks until it was too late, but recent events suggest otherwise.
Biden’s executive order, AI safety summits, open letters, and media coverage indicate serious societal concern over AI risks.
It’s unlikely AI capabilities will appear suddenly without warning signs, allowing time to study risks and regulate.
People likely care about risks like AI deception already, and will regulate them seriously, though not perfectly.
We should reconsider which problems require special attention versus default solutions.
Thoughtful, nuanced policy is needed, not just blanket advocacy. Value drift may be a neglected issue warranting focus.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Two key points I want to add to this summary:
I think these arguments push against broad public advocacy work and in favor of more careful efforts to target regulation well and make sure it is thoughtful. Since I think we'll likely get strong regulation by default, ensuring that regulation is effective and guided by high-quality evidence should be the most important objective at this point.
Policymakers will adjust the strictness of policy in response to evidence about the difficulty of alignment. The important question is not whether the current level of regulation is sufficient to prevent future harm, but whether we have the tools to ensure that policies can adapt appropriately to the best available evidence about model capabilities and alignment difficulty at any given time.