Executive summary: The post proposes “Stratified Safety,” a simplified, defensive framework adapted from AI safety that organizes animal advocacy against AI-driven harms into three mutually reinforcing layers—preventing harmful systems, shaping their capabilities, and building resilience when harm occurs—supported by movement-wide capacity building.
Key points:
The author argues that AI will likely intensify existing forms of animal suffering, such as factory farming and wildlife harm, unless advocates adopt a proactive, defensive strategy.
“Stratified Safety” adapts the Swiss cheese model into three layers: preventing the existence of harmful AI systems, shaping AI capabilities to constrain harm, and withstanding harms through resilient infrastructure.
Prevention focuses on policy, regulation, and governance interventions that block or limit harmful AI applications before deployment.
Shaping emphasizes technical interventions during training and deployment, such as benchmarks, data filtering, and human-in-the-loop oversight, to align systems with non-human welfare.
Withstanding assumes some harms will occur and prioritizes resilience measures like transparency systems, physical infrastructure, alternative proteins, and emergency shutdowns.
A foundational component—capacity and field-building—is presented as essential to enable and sustain all three layers over time.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.