AI and animal welfare: am I missing something?
Animal welfare is what brought me to EA. I spent several years working for animal advocacy organisations, and the EA ideal of rigorous thinking about where effort makes the biggest difference was something I believed in fully.
This post is me thinking aloud, not staking a firm position. I’d genuinely welcome pushback from people who know this space better than I do.
The framing of the problem is a bit odd to me
The AI x animals argument, as I understand it: AI systems are making decisions that affect how we use animals. Those systems don’t adequately represent animal welfare. If we can get welfare into the benchmarks/constitutions of AI labs, we can shift outcomes for animals at huge scale before they get locked in. Okay.
But nobody is ‘ignoring’ animal welfare; they’re just indifferent. AI systems are being built to do exactly what they were designed to do, which is to faithfully execute human preferences. And those are, in aggregate, to eat cheap meat, conduct research on living organisms when it’s convenient, and prioritise cost and efficiency in agricultural supply chains. AI is reflecting the values of humans. I don’t think you can sneak those values in, unless there are specific opportunities to tweak things here and there before they get cemented.
Are there such opportunities? So far, I can’t break this down to anything tangible. ‘If we don’t do anything, the systems will become entrenched and determine animal outcomes for decades to come’ - what systems? What outcomes? Who, where? Can someone give me a few clear examples of tractable situations?
Naming the situation isn’t enough. I suspect there are situations that represent a theoretical fork in the road: The EU AI Act is a real regulatory framework being implemented now. Procurement systems at major food companies are being built. Agricultural AI platforms from John Deere and Bayer are being deployed. But how is any of this tractable—what are you hoping orgs/grantees/EA people can do about those things?
It seems like the best we could hope for is to effect a thin layer of consideration on top of a reality (the collective attitudes of humans towards animals) that will bypass our efforts the moment it conflicts with something humans really do care about. Like profit.
The point I’ve seen raised about Claude’s constitution only containing one line regarding animal welfare, first of all seems arbitrary (it doesn’t matter how many words there are; only what the words say), and secondly, merely reflects the real situation; our attitudes, as a whole. Focusing on ‘making AI go better for animals’ by convincing AI labs to suddenly care seems to be addressing the symptom rather than the cause.
Where I think AI might help
What if, instead of trying to push concern for farmed or wild animals into AI labs situationally, we can use AI to make traditional animal agriculture obsolete? Maybe that’s where funding should go; at least, there are clearly definable ways that AI could catalyse that outcome:
Cell culture optimisation involves an enormous search space: finding the exact combination of nutrients, temperatures, and growth factors that makes cells proliferate efficiently. AI can model and run simulated experiments at a speed that wet-lab trial and error cannot match.
Scaffold design, one of the hardest unsolved problems in cultivated meat, involves getting cells to grow in three-dimensional structures that actually resemble meat texture. AI can help design and test scaffolding materials and geometries by modelling cell behaviour computationally.
I’ve also read that AI can optimise production processes in ways that could drive costs down dramatically.
Each of these seems like a more robust theory of change to me than ‘do something to prevent detrimental lock-in’.
What I’m uncertain about
The regulatory and scaling challenges that cultivated meat faces are large and I’m not qualified to assess them fully. I’m aware cultivated meat has had a difficult few years commercially and faces active political opposition in some markets. I don’t know if those are terminal or temporary problems.
It’s also possible I’m underestimating the leverage of getting welfare into AI systems; maybe one well-placed benchmark really does shift how frontier labs think, and that ripples out in ways I’m not aware of.
If that’s so, then can someone tell me, in plain English, what that looks like? E.g. ‘[lab] is currently planning [this development]. If we do [this action], we can change it to [this outcome], which will mean [x number] of animals experience [less suffering, presumably].’
At the very least, if there is a much clearer plan for impact for ‘making AI go better for animals’, then I think it ought to be communicated more concretely than what I’ve seen so far, to avoid people in the space either writing posts like this, or just kind of going along with the trend—even if they don’t understand it.
TLDR: I don’t understand the tractability of ‘make AI go better for animals’ except for where it may speed up our path to cultivated meat adoption, which isn’t mentioned in any of the ‘make AI go better for animals’ stuff that I’ve read.
Hi Siobhan! Great question and I’m genuinely glad you raised it. I’m the Exec Director of Sentient Futures which is trying to build out the AIxAnimals field and your confusion probably represents a failure on our part to properly communicate our ideas and the field’s progress. I can mostly speak to why we as an org have decided not to work directly on AI applications or cultivated meat.
A couple of (non-exhaustive) points:
There are two different concepts you are talking about here: Applied AI vs Frontier AI.
The applied side (i.e. accelerating cultivated meat, replacing animal testing, etc.) takes a lot of industry expertise, and has been tried and is still being tried outside of the EA space. There are a lot of capital, status quo, cultural, and regulatory barriers on this front that would take a lot of work and funding to overcome. Given the talent and funding constraints of the AIxAnimals space, I think it would be near impossible for this group of people to pivot there, especially since, with the funding constraints of alt proteins, even far more experienced food scientists are finding themselves unemployed. We’ve tried to support these efforts by platforming them in our conferences and newsletters, but going further would spread us quite thin, and we do not have the expertise in any of these areas.
There are also other applied AI interventions that we’ve platformed such as inter-species communication, precision livestock farming, and AI to replace animal testing. We’ve known for a while that working on these interventions directly is not our comparative advantage.
The frontier side (i.e. getting AI models, developers, and regulations to consider animals more): I think this is more tractable, and EAs are better positioned to do this at our current stage, for a couple of reasons:
I don’t think people are indifferent: AI folks at labs care more about animals than your average person. Many millions of dollars have already been donated to animal welfare efforts from AI lab employees (not public, but verified) with potentially hundreds of millions to come. Some are internal champions who really care, but just need a robust benchmark or red teaming case to pitch to their colleagues. Many are open to incorporating consideration for animals as long as it doesn’t run up against some other tradeoff like capabilities, human safety, or substantial profit. The work of the AIxAnimals field is to find these win-win situations where possible and package them up in a way that the frontier labs are happy to make a change.
Specifically the EU AI Act—a group of animal folks got together and participated in the working group consultations for the code of practice, and were successful in getting non-human welfare added as a systemic risk consideration in the final language. It was a short window of time, and if we had waited, we would have missed this opportunity to set a policy precedent and build upon other things like TFEU Article 13. This was done with just a small team of mostly volunteers, or people taking time out of their normal jobs to draft arguments and attend meetings. Making sure that it is enforced is going to take a lot more work than we currently have capacity for. But the language precedent is there to build on and use as a foundation. There was essentially no pushback (unlike with cultivated meat or any other animal protection laws) because the animal industries are not paying attention to frontier AI regulation. So this is quite tractable, but I suspect it won’t be for long.
Cultivated meat has a lot of enemies and things working against it—I do think fighting cultivated meat bans is one of the more important things that we could be doing for animals (see the lessons from this wargame we played) in the long term. But it seems like you can squash one and then another ban comes up, and it’s just this perpetual game of whack-a-mole. There are many people who are skilled at policy who I respect and donate to that are taking care of this. I do think they could use more help, but I don’t think it is as neglected or as underfunded. I think they also have more mainstream appeal and accessibility. For example, many governments and schools do give funding for alt protein R&D. Important, tractable, not as neglected. The AIxAnimals path goes to one that’s less traveled and where there are not a bunch of institutions already trying to protect the status quo. I don’t think it would be wise for everyone to go all in on cultivated because there is no guarantee that even if it was technically successful, it would get cultural adoption. We need to diversify our interventions.
Frontier and applied AI holds the promise of solving wild animal suffering. Most animal suffering is not man-made, and the current pace of research into reducing wild animal suffering is just a drop in the bucket compared to the scale of the problem. More advanced AI does hold the promise of being able to process large amounts of information and make complicated trade-off decisions and immediate actions. Right now, AI is awful at reasoning about wild animal welfare and defaults to ideals of conservation, biodiversity, and nonintervention. One concrete thing we’ve been working on is increasing the training data of animal perspectives and how AI intervention could be positive for animals. See this https://hyperstition.sentientfutures.ai/
If you want a concrete BOTEC about the value of aligning AI to animal welfare, check out this one by @MichaelDickens : https://mdickens.me/2026/03/24/alignment-to-animals_BOTEC/
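To show the general shape of such a BOTEC (not Michael’s actual figures — every number below is a hypothetical placeholder chosen purely for illustration):

```python
# Illustrative back-of-the-envelope calculation (BOTEC) of the expected value
# of getting animal welfare into frontier AI systems. All inputs are
# hypothetical placeholders, not real estimates from the linked post.

animals_affected_per_year = 80e9      # rough order of farmed land animals slaughtered yearly
share_of_decisions_ai_touches = 0.10  # hypothetical: welfare-relevant decisions AI mediates
welfare_improvement = 0.01            # hypothetical: fractional suffering reduction per animal
prob_intervention_works = 0.05        # hypothetical: chance the benchmark/constitution change matters

expected_animal_years_improved = (
    animals_affected_per_year
    * share_of_decisions_ai_touches
    * welfare_improvement
    * prob_intervention_works
)
print(f"{expected_animal_years_improved:.0f}")  # → 4000000
```

Even with deliberately conservative placeholder multipliers, the scale term dominates, which is the usual argument for why small probability-of-success interventions at the frontier can still pencil out.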
For a broad overview of what AI means for animal advocacy (both applied and frontier), I’d highly recommend watching this interview with Lewis Bollard.
This is what I have for now. I am not sure if that is a satisfying answer for you because it is just such a nascent field and we are trying to figure it out as we go, but I really appreciate you raising it.
Sounds like this is in reaction to yesterday’s launch of the Falcon Fund. I’m very excited this fund is happening, and I am personally donating to it.
I appreciate you thinking out loud! As Constance said, I think this points to a lack of communication around the concrete ideas. I think Sentient Futures has done a phenomenal job in bringing attention to this whole area, and to me the whole point of the Falcon Fund is to start turning all the thinking into concrete projects.
My background is in ML in the alternative protein space, so I am also very excited about the prospect of AI helping the development of cultivated meat. However, at this stage, I think the AI-cultivated space suffers even more from the exact problem you’re describing, where there is no well-defined problem. I wrote about this here if you’d like to discuss it more. I don’t think AI can model and run simulated experiments in this space. One day it might be able to, but that is much more likely to come downstream of other general advances in the AI sciences, and won’t be developed in the cultivated industry itself. We don’t have the money, talent, or data throughput to make that happen. I actually think what we should be aiming for in the short-to-medium term is something that looks like speeding up wet-lab trial and error: I think there are a lot of gains to be had from designing experiments better and deploying something that looks like self-driving labs as soon as possible. If you have specific ideas that we can try testing now, I would love to talk.
In general I agree that we need to put more thought into the AI-cultivated plan.
On AIxAnimals I think there are some pretty clear things we need to do, which are also described on the Falcon Fund page.
Having an observatory/watchdog organization is really critical. We currently have no view on how AI is being used in industries that will impact animals. That includes AI running factory farms, doing scientific research in areas like precision livestock farming, and helping with ecological management. We need to see what’s going on in order to make decisions.
Having benchmarks is similarly important, so that we have actual visibility into how these increasingly consequential systems behave. Then yes, hill-climbing on those numbers would be great.
I am sympathetic to your skepticism, especially that things that limit profitability will be hard. But I think we can afford to try here. Human values are not the same as what the markets provide. Humans’ stated preference is to care about animals: every animal-welfare ballot measure presented in any state in the US, whether led by Republicans or Democrats, has passed. It is one of the few bipartisan issues. I think a lot of the problems in animal welfare are due to people not being informed, in which case AI can be a powerful tool for aligning people’s behaviour with their values, which is something we should encourage.
It is going to be very hard to be concrete about what effect we see in the real world until AI systems become industrially relevant. Until then the primary job is to observe as much as we can and make sure we’re moving things in the right direction. We can do this by having a benchmark, and seeing if the change in model spec language improves the benchmark.
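As a minimal sketch of that feedback loop (benchmark a model, change the spec language, re-benchmark), under heavy assumptions — the keyword-matching scorer and the canned responses below are hypothetical stand-ins, since a real animal-welfare benchmark would grade model outputs far more carefully:

```python
# Toy sketch of the "benchmark -> spec change -> re-benchmark" loop described
# above. The scoring rule and responses are illustrative placeholders only.

WELFARE_TERMS = ("welfare", "suffering", "sentient")

def score_response(text: str) -> int:
    """Crude proxy: does the response engage with animal welfare at all?"""
    return int(any(term in text.lower() for term in WELFARE_TERMS))

def benchmark(responses: list[str]) -> float:
    """Fraction of benchmark prompts whose response considers welfare."""
    return sum(score_response(r) for r in responses) / len(responses)

# Hypothetical model outputs before and after a model-spec language change.
before = ["Maximise feed efficiency.", "Cull the slowest growers."]
after = ["Maximise feed efficiency within welfare constraints.",
         "Culling causes suffering; consider alternatives."]

print(benchmark(before), "->", benchmark(after))  # → 0.0 -> 1.0
```

The point is not the scoring rule, which is deliberately crude here, but that a fixed benchmark gives you a before/after number to attribute to a specific spec or constitution change.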
This is true in a way. A deeper problem is that we don’t know what values AI is reflecting. If you talk to an LLM, it will express some values, but it gives inconsistent answers depending on what questions you ask. We have no way of knowing whether its expressed values reflect its “true” values, if it has any. And we don’t know how things will change as AI becomes increasingly powerful.