AGI & Animals: Discussion Thread
This week, we are discussing the statement: “If AGI goes well for humans, it’ll go well for animals”. The announcement post, with a bit more info and a reading list, is here.
What is this thread for?
General discussions about and reactions to the debate statement.
Some of the comments on this thread will be populated directly from the debate banner on the homepage — these will mostly be people explaining why they voted the way they did.
However, you’re also welcome to comment on here directly, with any considerations you’d like to share, or questions you’d like to ask.
How should I understand the debate statement?
Again, our statement is: “If AGI goes well for humans, it’ll go well for animals”
The statement will ultimately mean whatever people interpret it to mean. The key is to explain how you are interpreting the statement in the comment that you attach to your vote. However, I can share a few notes which might pre-empt your questions:
AGI: Artificial General Intelligence. What exactly this is, and how transformative it is likely to be for the world economy and our ways of life, is likely to be a crux in this debate. As such, I won’t be offering a definition.
Goes well: Likewise, what it means for AGI to go well is likely to be a live element of the discussion. For example, ‘going well’ might mean humans are still in control of AI tools, or it might mean that humans are replaced by more beneficent machines. I’ll leave this up to you.
Animals: I’m talking about non-human animals. I’m specifically naming animals rather than ‘other minds’ to signal that this conversation isn’t primarily about digital minds.
Message me or comment in the thread with me tagged if you have any questions.
Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won’t involve animals or suffering at all.
The good news is that life on Earth has been going better and better for humans over the millennia. For instance, we have technologies that make it easy to grow tons and tons of food, so lots of people can eat as much as they want. We have cures for lots of previously deadly diseases, so lots of us humans can live a very long time. And lots of people live in countries that recognize their rights. We also have a robust international economy that makes it really easy for a large number of people to buy the goods and services they want—and for lots of other people to get paid producing those goods and services!
The bad news is that none of this has translated to things going well for animals. :-( In fact, it has translated to the opposite. Things have been going worse and worse for animals over the millennia. For instance, factory farming, which causes a HUGE amount of suffering for animals, developed very recently in human history, and it developed as a byproduct of humans getting the things they want most (like a great economy, and the ability to produce food cheaply). So we have seen that humans getting more and more of what we want doesn’t translate to animals getting what they need.

Of course, humans do also want animals to be treated well, on some level! But humans’ main goals are human-oriented goals. And so when we get more and more ability to achieve our goals, we put those human-oriented goals first, resulting in negative externalities for animals.

If AI goes well for humans, it’ll go well for humans. It’ll be aligned with what humans want. And that will mean it’s aligned with prioritizing human interests over all others. Sure, it’ll care about animals a little, the way humans care about animals a little. But it will continue to put human interests first. And that will continue to result in externalities for animals.
The same ways people harm animals now (e.g. for food, entertainment, fashion, science, etc.) may continue. And new ways to harm animals may develop that we never could have imagined before AI. For instance, people love having pet dogs. When their pet dogs die, people are sad. People may want to be able to upload their pet dog’s brain to the cloud to hang out with the pet dog when their dog dies. But trying to develop this technology may be a lot of work. AI may do the work by uploading 100,000 dog brains, or 100,000 copies of the same dog brain, to the cloud, and running various tests to see what works best. Perhaps a lot of these dogs will suffer immensely due to some mistake AI made in an early draft or some feature AI failed to include. And perhaps the suffering will be made worse because the dogs don’t have bodies and cannot even express their suffering without vocal cords or paws.

Eventually, AI may work out the kinks before it rolls out the final keep-your-dead-pet-alive-as-an-app-on-your-phone product. But there’s all that behind-the-scenes suffering in the meantime. Humans care about animals a little. But humans love to turn a blind eye to behind-the-scenes suffering, so humans won’t be too upset about this situation. Then maybe AI realizes humans would like an upgrade to their pet-on-your-phone product. And that means AI needs 100,000 more copies of dog brains to do more experiments. AI that is fully aligned with human interests would realize humans would like the upgrade more than they would be bothered by the suffering inherent in creating it. So AI will create the upgrade.

This is just an example to illustrate my point. But I think there are lots of ways animals can be caused to suffer that we can’t even imagine right now.
What animals need is for AI to be aligned with animal interests, too—not just human interests.
Hi! There are no labels on the slider bar, so it’s initially unclear which side is agree vs. disagree.
AGI could, in principle, find solutions for the key problems that animals face, but I would argue the main issue is that it won’t automatically enlighten humans.
A couple of different potential mechanisms could help farmed animals:
Solving cultivated meat or brainless animals
Creating better welfare technologies (e.g. solving all disease issues on current farms)
Generating enough societal wealth to make welfare improvements like lowering stocking density trivial
More abstractly, people generally care about welfare, so it will be one of the things that an aligned AGI optimizes for. However, the outcome won’t be optimal for animals, because AGI won’t be directly optimizing for animal welfare. For example, most people don’t think it’s wrong to eat meat, and we might still not pursue things like beneficial vaccines or genetic edits.
Wild animals are a less clear case, though!