Executive summary: This post argues that s-risk reduction — preventing futures with astronomical amounts of suffering — can be a widely shared moral goal, and proposes using positive, common-ground proxies to address strategic, motivational, and practical challenges in pursuing it effectively.
Key points:
S-risk reduction is broadly valuable: While often associated with suffering-focused ethics, preventing extreme future suffering can appeal to a wide range of ethical views (consequentialist, deontological, virtue-ethical) as a way to avoid worst-case outcomes.
Common ground and shared risk factors: Many interventions targeting s-risks also help with extinction risks or near-term suffering, especially through shared risk factors like malevolent agency, moral neglect, or escalating conflict.
Robust worst-case safety strategy: In light of uncertainty, a practical strategy is to maintain safe distances from multiple interacting s-risk factors, akin to health strategies focused on general well-being rather than specific diseases.
Proxies improve motivation, coordination, and measurability: Abstract, high-stakes goals like s-risk reduction can be more actionable and sustainable if translated into positive proxy goals — concrete, emotionally salient, measurable subgoals aligned with the broader aim.
General positive proxies include: movement building, promoting cooperation and moral concern, malevolence mitigation, and worst-case AI safety — many of which have common-ground appeal.
Personal proxies matter too: Individual development across multiple virtues and habits (e.g. purpose, compassion, self-awareness, sustainability) can support healthy, long-term engagement with s-risk reduction and other altruistic goals.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.