I just re-read this, really appreciate the thorough research!
A few thoughts:
A lot of the “optimistic” / hopeful cases are basically saying “it is possible to use this technology for X” without saying what the incentive to help would be. Seems to me that without a gears-level story about why people / AI systems would be incentivized to help animals, by default they will not.
Moral circle expansion (MCE) is one way to change the incentives, especially MCE among groups likely to hold power in the future.
But this is more relevant for longer AI timelines, since MCE is slow.
This does not seem exactly right to me:
“People are also more likely to consider animal rights to be a legitimate concern if they are not themselves directly affected by poverty and poor health.”
There are lots of vegetarians in India who are quite poor compared to people in high-income countries, but who are much more inclined to consider animal rights legitimate than most Westerners.
It seems more accurate to say that poverty makes people less likely to change their values, since doing so requires time and resources for self-reflection and behavioral adjustment. So I agree that MCE may be easier with higher-income populations.
Hey Benny, thanks for the thoughts!
Totally agree on your first point. I guess you could divide positive use cases up into a few different categories:
In some cases, like alt proteins, there are already people/companies intentionally trying to create less exploitative systems with a reasonable chance of success, and AI will help them achieve that.
In others, like most alternatives to animal testing or many forms of Precision Livestock Farming, AI enables methods that are cheaper, more effective, etc., so those methods will probably end up getting adopted and incidentally helping animals in the process.
But in many other areas, the positive use cases rely on getting governments, the public, industry, etc. to care about animals. If we think we’re likely to see transformative AI fairly soon, it probably makes sense to target those kinds of MCE efforts specifically at the people who will have the most say over AI systems, like governments and AI companies. I’ve explored that a bit in another post.
On your second point: thanks, that’s a good point and I think your suggestion is probably more accurate!