Executive summary: The author reflectively argues that, given near-term AI-driven discontinuity and extreme uncertainty about post-transition worlds, suffering-focused anti-speciesists should prioritize capacity building, influence, and coalition formation over most medium-term object-level interventions, while focusing especially on preventing worst-case suffering under likely future power lock-in.
Key points:
The author frames the future as split between a pre-transition era with tractable feedback loops and a post-transition era where impact could be astronomically large but highly sign-uncertain.
They argue that most medium-term interventions are unlikely to survive the transition, and that longtermism should be pursued fully or not at all.
Capacity building—movement growth, epistemic infrastructure, coordination, and AI proficiency—is presented as a strategy robust across many possible futures.
Short-term wins can still matter by building credibility, shifting culture, and testing the movement’s ability to exert influence before transition.
The author expects AI-enabled power concentration and lock-in, making future suffering the product of deliberate central planning rather than decentralized accidents.
They suggest prioritizing prevention of worst-case “S-risks,” influencing tech-elite culture (especially in San Francisco), diversifying beyond reliance on frontier labs, and engaging AI systems themselves as future power holders or moral patients.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.