Thanks for this; it’s a hugely valuable exploration and an invitation to the community to think beyond the short-term horizon. This mindset feels vital for anyone working at the intersection of AI, economic change, and animal welfare.
I feel EA is generally good at identifying neglected problems within existing systems, but there’s a whole category of neglectedness that emerges during transitions—where familiar advocacy approaches might lose traction, where new decision-makers enter the picture, where the very metrics of moral progress could shift. I find this space fascinating and full of opportunity (and risk), as, it seems, do you!
The deep dives you’ve shared on AI and animal advocacy illustrate this well. They show how even our most established interventions (corporate campaigns, research, network building) could be fundamentally transformed. But what’s particularly interesting is how these AI-driven changes are happening within our current economic paradigm. When we layer on the possibility of broader economic transitions, the complexity multiplies.
We need to understand how values get embedded when paradigms shift. It’s a different kind of tractability analysis: instead of asking “how do we solve this problem now?” we’re asking “how do we ensure this problem remains solvable later?” or even better “how do we design out this problem during the shift?”
Thanks again for this thoughtful piece.
Thanks so much for this thoughtful and clear breakdown; it’s one of the most useful framings I’ve seen for thinking about strategy in the face of paradigm shifts.
The distinction between the “normal(ish)” and “transformed” eras is especially helpful, and I appreciate the caution around assuming continuity in our current levers. The idea that most of today’s advocacy tools may simply not survive or translate post-shift feels both sobering and clarifying. The point about needing a compelling story for why any given intervention’s effects would persist beyond the shift is well taken.
I also found the discussion of moral “lock-ins” particularly resonant. The idea that future systems could entrench either better or worse treatment of animals, depending on early influence, feels like a crucial consideration, especially given how sticky some value assumptions can become once embedded in infrastructure or governance frameworks. There’s probably a lot more to map here about what kinds of decisions are most likely to persist and where contingent choices could still go either way.
I’m exploring some of these questions from a different angle, focusing on how animal welfare might (or might not) be integrated into emerging economic paradigms (I hope to post on this soonish), but this post helped clarify the strategic terrain we’re navigating. Thanks again for putting this together.