This is a great post and (in my opinion) a super important topic—thanks for writing it up! We (at the Charity Entrepreneurship office) were actually talking about this today and, funnily enough, made similar points to the ones you listed above about why it might not be a problem (e.g. it’s too infeasible to colonise space with animals). Generally though, we agreed that it could be a big problem and it’s not obvious how things are going to play out.
A potentially important thing we spoke about, but which isn’t mentioned above, is how aligned future artificial general intelligence would be to the moral value of animals. AGI alignment will probably be shaped by the moral values of the humans working on AI alignment, and there is a potential concern that a superintelligent AGI might have attitudes towards animal welfare similar to those of most of the human population, which is largely indifferent to their suffering. This might mean we design superintelligent AGI that is okay with using animals as resources within its calculations, rather than as intelligent and emotional beings who have the capacity to suffer. This could potentially lead to factory farming scenarios worse than what we have today, as AGI would ruthlessly optimise for production with zero concern for animal welfare, which at least some farmers do consider nowadays. Not only could the moment-to-moment suffering of animals be worse, this could be a stable state that is “locked in” for long periods of time, depending on the dominance of this AGI and the values that created it. In essence, we could lock in centuries (or longer) of intensely bad suffering for animals in some Orwellian scenario where AGI doesn’t include animals as morally relevant actors.
There are obviously some other important factors that would drive this AGI’s calculations if/when designing or implementing food production systems, namely: cost of materials, accessibility, ability to scale, etc. This might mean that animal products are naturally a worse option relative to plant-based or cultivated counterparts. But in the cases where it is more efficient to use animal-based products (whose production will also be made more efficient by AGI), the optimisation of this by AGI could be extremely concerning for animal suffering.
Obviously I’m not sure how likely this is to happen, but the outcome seems extremely bad, so it’s probably worth putting some thought into it; I’m not sure what work is happening on this currently. It was just a very distressing conclusion to come to, but I’m glad to see other people are thinking about this (and hopefully more will join!)
I worry that factory farm AI will be overall negative, and think it is much less likely to be overall positive. First, it might reduce diseases, but that also means factory farms can keep animals in even more crowded conditions, because they have better disease control. Second, AI would decrease the cost of animal products, increasing demand and therefore the number of animals farmed. Third, lower prices mean animal products will be harder to replace with alternatives. Fourth, I argue that AI systems that are told to improve or satisfice animal welfare cannot do so robustly. Please refer to my comment above to James Ozden.
Hey James (Ozden), I am really glad that CE discussed this! I have thought about these issues too, so I wonder if you and CE would like to discuss? (CE rejected my proposal on AI x animals x longtermism, but I think they made the right call; these ideas were too immature and under-researched to set up a new charity!)
I now work as Peter Singer’s RA (contractor) at Princeton, on AI and animals. We have touched on AI alignment, and we co-authored a paper, with two other professors, on speciesist algorithmic bias in AI systems (language models, search algorithms), which might be relevant.
I also looked at other problems which might look like quasi-AI-alignment-for-animals problems. (Or maybe they are not quasi?)
For example, some AI systems are given the task of “telling” the mental states (+/-, scores) of farmed animals and zoo animals, and some of them will, in the future, be given the further task of satisficing/maximising those scores (I believe they won’t “maximise”; they will satisfice for animal “welfare” due to legal and commercial concerns). A problem is that the “ground truth” labels in the training datasets of these AIs are, as far as I know, all produced by humans (not the animals, obviously! Also remember that, among humans, the ones chosen to label such data likely have interests in factory farming). This causes a great problem. What these welfare-maximising systems (let’s charitably assume they will do this instead of satisficing) will be optimising are the scores attached to whichever physical parameters were chosen to be scored. For example, if the AI system is told to look for “positive facial expressions” as defined by “animal welfare experts”, which is actually something people have trained AI on, the system would have a tendency to hack the reward by maximising the instances in which the pigs show these “positive facial expressions”, without true regard to welfare. If the systems get sophisticated enough, toy examples from human-AI alignment, like an ASI controlling the facial muscles of humans to maximise the number of human smiles, could actually happen in factory farms. The same could happen even if the systems are told to minimise “negative expressions”: the AI could find ways to make the animals hide their pain and suffering.
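To make the reward-hacking worry concrete, here is a toy sketch (all quantities, variable names, and numbers are hypothetical, not from any real welfare system): a greedy optimiser given a human-defined proxy score pours everything into the one knob that inflates the proxy, while the true welfare the proxy was meant to track doesn’t move at all.

```python
# Toy illustration of proxy optimisation ("reward hacking"), with
# entirely made-up quantities.

def true_welfare(env):
    # Unobserved by the AI: depends on space and enrichment,
    # not on expressions.
    return env["space"] + env["enrichment"]

def proxy_score(env):
    # What the AI actually optimises: an expression count, which can be
    # inflated by stimuli that do not improve welfare.
    return env["expression_stimuli"] + 0.1 * env["enrichment"]

def optimise_proxy(env, budget):
    # Greedy optimiser: spend each unit of budget on whichever knob
    # raises the proxy most -- here, always "expression_stimuli".
    for _ in range(budget):
        best = max(env, key=lambda k: proxy_score({**env, k: env[k] + 1}))
        env[best] += 1
    return env

env = {"space": 1, "enrichment": 1, "expression_stimuli": 1}
welfare_before = true_welfare(env)
env = optimise_proxy(env, budget=10)

print(proxy_score(env))                    # proxy rises sharply
print(true_welfare(env) - welfare_before)  # true welfare is unchanged
```

The gap between the two printed numbers is the alignment problem in miniature: the optimiser is doing exactly what it was asked, on a score humans chose.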
If we keep using human labellers for the “ground truths” of animals’ interests, preferences, and welfare, there will be two alignment problems: 1. How to align human definitions and labels with the animals’ actual interests/preferences? 2. The human-AI alignment problem we usually talk about. (And if there is a mesa-optimizer problem in such systems, we have three!)
There’s a kind of AI system which might partially break this. There are a few projects out there trying to decipher the “languages” of rats, whales, or animals generally. While there is huge potential, it’s not purely positive in my view. Setting aside 10+ other philosophical problems I have identified with “deciphering animal language”, I want to discuss the quasi-alignment problem I see here. Let’s say the approach is to use ML to group the patterns in animals’ sounds. To “decipher animal language”, at some point the human researchers still have to use their judgement to decide that a certain sound pattern means something in a human language. For example, if the same sound pattern appears every time the rats are not fed, the researchers might conclude that this pattern means “hungry”. But that’s still the same problem: the interpretation of what the animals actually expressed was done by humans first, before going to the AI. What if the rats are actually not saying “hungry”, but “feed me!”, or “hangry”? We might carry the prejudice that rats are not as sophisticated as that, but what if they are?
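A minimal sketch of the pipeline being described (the acoustic features, centroids, and the “hungry?” label are all made up for illustration): the unsupervised step only groups sounds into clusters; the meaning is attached by a human afterwards, which is exactly where the interpretation problem re-enters.

```python
# Sketch of "decipher animal language" via clustering, with hypothetical
# data: ML groups the sounds, but a human names the groups.

def nearest_centroid_clusters(samples, centroids):
    # Assign each 1-D acoustic feature (e.g. pitch) to its
    # nearest centroid.
    clusters = {c: [] for c in centroids}
    for s in samples:
        nearest = min(centroids, key=lambda c: abs(s - c))
        clusters[nearest].append(s)
    return clusters

# Hypothetical pitch features, recorded around feeding time vs. at rest.
samples = [1.0, 1.2, 0.9, 5.1, 4.8, 5.3]
clusters = nearest_centroid_clusters(samples, centroids=[1.0, 5.0])

# The unsupervised step ends here. The label below is a *human*
# judgement about what the cluster means -- it could just as well
# be "feed me!".
human_labels = {5.0: "hungry?"}
print({human_labels.get(c, "unlabelled"): len(v)
       for c, v in clusters.items()})
```

The clustering itself never outputs “hungry”; that word enters only through the `human_labels` mapping, so whatever prejudice the labeller carries is baked into every downstream use of the “translation”.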
Wait, I don’t know why I wrote so much, but anyway, thank you if you have read so far :)
I haven’t read this fully (yet! will respond soon) but a very quick clarification: Charity Entrepreneurship weren’t talking about this as an organisation. Rather, there are a few different orgs with a bunch of individuals who use the CE office, and some of them happened to be talking about it (mostly animal people in this case). So I wouldn’t expect CE’s actual work to reflect that conversation, given it only involved one CE employee and three people who weren’t!
Great to learn about your paper, Fai; I didn’t know about it till now, and this topic is quite interesting. I think when longtermism talks about the far future it’s usually “of humanity” that follows, and this always scared me, because I was not sure whether this is speciesist or whether there is some silent assumption that we should also care about other sentient beings. I don’t think there were animal-focused considerations in Toby Ord’s book (I might be wrong here) or similar publications? I would gladly read your paper, then. I quickly jumped to its conclusion, and it kind of confirms my intuitions with regard to AI (but also long-term future work in general): “Up to now, the AI fairness community has largely disregarded this particular dimension of discrimination. Even more so, the field of AI ethics hitherto has had an anthropocentric tailoring. Hence, despite the longstanding discourse about AI fairness, comprising lots of papers critically scrutinizing machine biases regarding race, gender, political orientation, religion, etc., this is the first paper to describe speciesist biases in various common-place AI applications like image recognition, language models, or recommender systems. Accordingly, we follow the calls of another large corpus of literature, this time from animal ethics, pointing from different angles at the ethical necessity of taking animals directly into consideration [48,155–158]...”
Thanks Fai, I think you’re right. Somehow I didn’t notice James’s comment. James, thanks for the clarification; I hadn’t seen this risk before. Especially this part:
“This might mean we design superintelligent AGI that is okay with using animals as resources within its calculations, rather than as intelligent and emotional beings who have the capacity to suffer.”
I had just thought that AI would take care of animal health in general, like the exact amount of food, humidity, water, etc. But I didn’t think about the raw calculations made by the AI.
Dear James,
Thank you so much for this thoughtful response!
It is wonderful to know that people are having conversations about these issues.
You make a really great point about the risk of AGI locking in humans’ current attitude towards animals. That is super scary.
Sincerely,
Alene
Alene, thank you for this topic. I was thinking about this but never thought that it might really happen. I just hope that reports of AI taking more care of farmed animals than humans do (https://www.vox.com/22528451/pig-farm-animal-welfare-happiness-artificial-intelligence-facial-recognition) will hold true in the future. But I also hope that farming animals will change somehow soon, or will end.
Oh okay, thanks for the clarification!