That’s right: we don’t have any ongoing projects exclusively on the impact of AI on nonhuman biological animals, though much of our research touches on it, especially the outer alignment idea of ensuring an AGI or superintelligence accounts for the interests of all sentient beings, including wild and domestic nonhuman biological animals. We also have several empirical projects collecting data on moral concern for both animals and AI, such as work on perspective-taking, predictors of moral concern, and our recently conducted US nationally representative survey on Artificial Intelligence, Morality, and Sentience (AIMS).
For various reasons discussed in those “nonhumans and the long-term future” posts and in essays like “Advantages of Artificial Intelligences, Uploads, and Digital Minds” (Sotala 2012), biological nonhuman animals seem less likely than animal-like digital minds to exist in very large numbers in the long-term future. That doesn’t mean we shouldn’t work on the impact of AI on biological nonhuman animals, but it has led us to prioritize laying groundwork on the nature of moral concern and the possibility space of future sentience. I can say that many researcher applicants have proposed agendas focused more directly on AI and biological nonhuman animals, and we’re in principle very open to that; there are far more promising research projects in this space than we can fund at the moment. However, I don’t think Sentience Institute’s comparative advantage is working directly on projects like CETI or Interspecies Internet, which wade into the details of animal ethology or neuroscience using machine learning. That said, I’d love to see a blog-depth analysis of the short-term and long-term potential impacts of such projects, especially if there are more targeted interventions (e.g., translating farmed animal vocalizations) that could be high-leverage for EA.
Thanks for the explanation; I do support what SI is doing (researching problems around digital sentience as moral patients, which seems to be an important and neglected area), and your reasoning makes sense!