Thanks for writing this Max! The likelihood that my and other advocates’ work could be made completely irrelevant in the next few years has been nagging at me. Because you invited loose thoughts, I wanted to share my reflections on this topic after reading your write-up:
If AI changes the world massively over the next 5-10 years but there’s no sci-fi-style intelligence explosion:
Many/most of the specific interventions that animal advocates are using successfully today will no longer work in a completely different context.
This means we should ‘exploit’ proven strategies as quickly as possible today (hard with funding as a bottleneck)
This means we shouldn’t spend as much ‘exploring’ new strategies that aren’t robust to a radically transformed world
The most robust strategies for a transformed world (it seems to me) are ones that increase the moral consideration that empowered agents (humans and AIs) have for animals, since this will lead those agents to make more animal-friendly choices, whatever that world looks like
Unfortunately we’re not very good at this as a movement right now! But more efforts to figure it out, particularly ones that are realistic about human psychology, seem needed to me
If we get an intelligence explosion:
As above, but humans will be making far fewer of the important decisions, so it becomes far more important to increase the moral consideration that AIs specifically have for animals (which means it’s more important to target advocacy at the specific people/governments influencing the values of the AIs, and less important to do broad public advocacy)
Either way: AI could make an alt-protein end game for factory farming far more technologically viable. We should be doing what we can to create the most favorable starting conditions for AIs / people-advised-by-AIs to choose the alt-protein path (over, for example, the path of further intensifying animal agriculture). One particularly promising thing we could do here is remove regulatory barriers to alt-protein scale-up and commercialisation, because if AI makes this technologically possible but policy lags behind, that lag alone could be reason enough for the AIs / people-advised-by-AIs to decide not to pursue this path.
Keen to hear people’s reactions to this :)