I just re-read this, really appreciate the thorough research!
A few thoughts:
A lot of the “optimistic” / hopeful cases basically say “it is possible to use this technology for X” without saying what the incentive would be to help. Seems to me that without a gears-level story about why people / AI systems would be incentivized to help animals, the default is that they won't.
Moral circle expansion (MCE) is one way to change the incentives, especially MCE among groups likely to hold power in the future.
But this is more relevant for longer AI timelines, since MCE is slow.
This does not seem exactly right to me:
“People are also more likely to consider animal rights to be a legitimate concern if they are not themselves directly affected by poverty and poor health.”
There are lots of vegetarians in India who are quite poor compared to people in high-income countries, but who are much more inclined than most Westerners to consider animal rights legitimate.
It seems more accurate to say that poverty makes people less likely to change their values, since value change requires time and resources for self-reflection and behavioral adjustment. So I agree that MCE may be easier among higher-income populations.
Excited to read your work, Seth. Thanks for sharing!