The good news is that life on Earth has been going better and better for humans over the millennia. For instance, we have technology that makes it easy to grow tons and tons of food, so lots of people can eat as much as they want. We have cures for lots of previously deadly diseases, so lots of us humans can live a very long time. And lots of people live in countries that recognize their rights. We also have a robust international economy that makes it really easy for a large number of people to buy the goods and services they want—and for lots of other people to get paid producing those goods and services!
The bad news is that none of this has translated to things going well for animals. :-( In fact, it has translated to the opposite. Things have been going worse and worse for animals over the millennia. For instance, factory farming, which causes a HUGE amount of suffering for animals, developed very recently in human history, and it developed as a byproduct of humans getting the things they want most (like a strong economy and the ability to produce food cheaply). So we have seen that humans getting more and more of what we want doesn’t translate to animals getting what they need. Of course, humans do also want animals to be treated well, on some level! But humans’ main goals are human-oriented goals. And so as we gain more and more ability to achieve our goals, we put those human-oriented goals first, resulting in negative externalities for animals. If AI goes well, it’ll go well for humans. It’ll be aligned with what humans want. And that will mean it’s aligned with prioritizing human interests over all others. Sure, it’ll care about animals a little, the way humans care about animals a little. But it will continue to put human interests first. And that will continue to result in negative externalities for animals.
The ways people harm animals now (e.g. for food, entertainment, fashion, science, etc.) may continue. And new ways to harm animals may develop that we never could have imagined before AI. For instance, people love having pet dogs. When their pet dogs die, people are sad. People may want to be able to upload their pet dog’s brain to the cloud so they can hang out with the pet dog after it dies. But developing this technology may be a lot of work. AI may do the work by uploading 100,000 dog brains, or 100,000 copies of the same dog brain, to the cloud, and running various tests to see what works best. Perhaps a lot of these dogs will suffer immensely due to some mistake AI made in an early draft or some feature AI failed to include. And perhaps the suffering will be made worse because the dogs don’t have bodies and cannot even express their suffering without vocal cords or paws. Eventually, AI may work out the kinks before it rolls out the final keep-your-dead-pet-alive-as-an-app-on-your-phone product. But there’s all that behind-the-scenes suffering in the meantime. Humans care about animals a little. But humans love to turn a blind eye to behind-the-scenes suffering, so humans won’t be too upset about this situation. Then maybe AI realizes humans would like an upgrade to their pet-on-your-phone product. And that means AI needs 100,000 more copies of dog brains to do more experiments. AI that is fully aligned with human interests would realize humans would like the upgrade more than they would be bothered by the suffering inherent in creating it. So AI will create the upgrade. This is just one example to illustrate my point. But I think there are lots of ways animals could be caused to suffer that we can’t even imagine right now.
What animals need is for AI to be aligned with animal interests, too—not just human interests.