Thanks for this post, Max!

tl;dr: Lemme know if you have ideas for approaches to animal-inclusive AI that would also rank among the most promising ways to reduce human extinction risk from AI. I think they probably don't exist, but it'd be wicked cool if they did.
Most EAs working on AI safety are primarily interested in reducing the risk of human extinction. I agree that this is of astronomical importance, especially when you consider all the wild animal suffering that would continue in our absence.
Many things that would move us toward animal-inclusive AI would also help move us away from extinction risks. But I suspect the majority of those things, while helpful, would not be among the most helpful ways to reduce extinction risk. In other words, we should be wary of suspicious convergence: "what is best for one thing is usually not the best for something else."
I’m working on plans to do more to support a rigorous search for approaches to animal-inclusive AI (or approaches to advancing wild animal welfare science broadly) that would also rank among the most promising ways to reduce human extinction risk from AI. In the meantime, I’d encourage anyone interested in the broader subject to consider this narrower subset, and to reach out to me if they’re excited to work on it more (cameronms@wildanimalinitiative.org).
To be clear, I also think animal-inclusive AI is worth pursuing for its own sake (i.e., working on animal-inclusive AI seems likely to be among the most impactful things you can do to make the world a better place in the set of scenarios where humans don’t go extinct), and I’d be excited to see work on most of the approaches discussed above. In those cases—especially when building coalitions with people who might have different priorities—I think it’s useful to be transparent about the fact that what we’re doing is important, but we don’t think it’s one of the most promising ways to avoid human extinction.
Thanks Cameron! That’s a helpful point that I didn’t really touch on in this post. Great that you’re doing work in that space—I’m really interested to hear more about it so will get in touch.
I’m working on plans to do more to support a rigorous search for approaches to animal-inclusive AI (or approaches to advancing wild animal welfare science broadly) that would also rank among the most promising ways to reduce human extinction risk from AI.
Interesting! I am interested in discussing this idea further with you.
Could another way to think about it be to search within the best approaches to reducing human x-risk for a subset that is also animal-inclusive? For example, if working on AI alignment is one of the best ways to reduce human x-risk, then we could look for the subset of alignment strategies that are also animal-friendly.