I started caring about animal welfare at the age of 12. Nearly a decade later, I discovered effective altruism and decided to start down an earning-to-give career path as a physician, bootstrapping my own telehealth company. Once I started trying to decide where to donate, I was quite shocked by the limitations of the existing organizations and communities in the EA/animal space, so I switched into doing meta-work by co-founding Hive and later, after expanding my moral circle to include all kinds of nonhumans including artificial minds, founding Sentient Futures (formerly AI for Animals).
Constance Li
On AI in farming specifically, I talk about that a bit in this presentation on the tradeoffs between efficiency, welfare, and the middle ground of health. I also compare two scenarios, one where pro-animal people get the first-mover advantage and one where industry does, and how each might play out.
Hi Siobhan! Great question, and I’m genuinely glad you raised it. I’m the Executive Director of Sentient Futures, which is trying to build out the AIxAnimals field, and your confusion probably represents a failure on our part to properly communicate our ideas and the field’s progress. I can mostly speak to why we as an org have decided not to work directly on AI applications or cultivated meat.
A couple of (non-exhaustive) points. There are two different concepts you are talking about here: applied AI vs. frontier AI.
The applied side (i.e. accelerating cultivated meat, replacing animal testing, etc.) takes a lot of industry expertise and has been tried, and is still being tried, outside the EA space. There are a lot of capital, status quo, cultural, and regulatory barriers on this front that would take a lot of work and funding to overcome. Given the talent and funding constraints of the AIxAnimals space, I think it would be near impossible for this group of people to pivot there, especially since, with the funding constraints of alt proteins, even far more experienced food scientists are finding themselves unemployed. We’ve tried to support these efforts by platforming them in our conferences and newsletters, but going further would spread us quite thin, and we do not have the expertise in any of these areas.
There are also other applied AI interventions that we’ve platformed such as inter-species communication, precision livestock farming, and AI to replace animal testing. We’ve known for a while that working on these interventions directly is not our comparative advantage.
The frontier side (i.e. getting AI models, developers, and regulations to consider animals more): I think this is more tractable, and EAs are better positioned to do this at our current stage, for a couple of reasons:
I don’t think people are indifferent: AI folks at labs care more about animals than the average person. Many millions of dollars have already been donated to animal welfare efforts by AI lab employees (not public, but verified), with potentially hundreds of millions to come. Some are internal champions who really care but just need a robust benchmark or red-teaming case to pitch to their colleagues. Many are open to incorporating consideration for animals as long as it doesn’t run up against some other tradeoff like capabilities, human safety, or substantial profit. The work of the AIxAnimals field is to find these win-win situations where possible and package them up in a way that makes the frontier labs happy to make a change.
Specifically the EU AI Act—a group of animal folks got together, participated in the working group consultations for the code of practice, and were successful in getting non-human welfare added as a systemic risk consideration in the final language. It was a short window of time, and if we had waited, we would have missed this opportunity to set a policy precedent and build upon other things like TFEU Article 13. This was done with a small team of mostly volunteers, or people taking time out of their normal jobs, drafting arguments and attending meetings. Making sure it is enforced is going to take a lot more work than we currently have capacity for, but the language precedent is there to build on and use as a foundation. There was essentially no opposition pushback (unlike with cultivated meat or any other animal protection laws) because the animal industries are not paying attention to frontier AI regulation. So this is quite tractable, but I suspect it won’t be for long.
Cultivated meat has a lot of enemies and things working against it—I do think fighting cultivated meat bans is one of the more important things we could be doing for animals in the long term (see the lessons from this wargame we played). But it seems like you can squash one ban and then another comes up; it’s a perpetual game of whack-a-mole. There are many people skilled at policy, whom I respect and donate to, who are taking care of this. I do think they could use more help, but I don’t think the area is as neglected or underfunded, and I think it has more mainstream appeal and accessibility. For example, many governments and schools do give funding for alt protein R&D. Important, tractable, not as neglected. The AIxAnimals path is one less traveled, where there are not a bunch of institutions already trying to protect the status quo. I don’t think it would be wise for everyone to go all in on cultivated meat, because there is no guarantee that even if it were technically successful, it would get cultural adoption. We need to diversify our interventions.
Frontier and applied AI hold the promise of solving wild animal suffering. Most animal suffering is not man-made, and the current pace of research into reducing wild animal suffering is just a drop in the bucket compared to the scale of the problem. More advanced AI does hold the promise of being able to process large amounts of information, make complicated trade-off decisions, and take immediate action. Right now, AI is awful at reasoning about wild animal welfare and defaults to ideals of conservation, biodiversity, and nonintervention. One concrete thing we’ve been working on is increasing the training data of animal perspectives and how AI intervention could be positive for animals. See: https://hyperstition.sentientfutures.ai/
If you want a concrete BOTEC of the value of aligning AI to animal welfare, check out this one by @MichaelDickens : https://mdickens.me/2026/03/24/alignment-to-animals_BOTEC/
For a broad overview of what AI means for animal advocacy (both applied and frontier), I’d highly recommend watching this interview with Lewis Bollard.
This is what I have for now. I’m not sure it’s a satisfying answer, because this is such a nascent field and we are figuring it out as we go, but I really appreciate you raising it.
Hey! Are you going to offer an API? Would love to set this up as a filtered feed on my Slack community.
depends a lot on how much control AIs end up having, their values/reasoning, and which (if any) humans end up getting power
Announcing the Sentient Futures spring 2026 AI×Animals Fellowship
AI, Animals & Digital Minds NYC 2025: Retrospective
Thank you for this writeup! Coincidentally, I also did a recent announcement about a gap of people pushing forward on nonhuman welfare risk reduction through the EU AI Act Code of Practice.
AGI × Animals Wargame
AI, Animals, & Digital Minds NYC 2025
Hey Ariel! Just to give you a quick update, AI for Animals is rebranding to Sentient Futures to better reflect our overall goal of making the future go well for all sentient beings and avoid committing ourselves to a specific theory of change.
Also, we are running a wargame to figure out how different interventions might end up playing out for animals in the context of AGI. That will give us some insight into which strategies could be promising. In the meantime, we are fairly certain that capacity building (growing a collaborative movement, refining ideas, creating networks with dense talent, and nurturing stakeholder relationships), so that we can adopt/test a promising intervention quickly, is a fairly good bet.
AI, Animals, & Digital Minds 2025: Retrospective
It depends on what you find yourself wanting to do more… it definitely helps to come into a network trying to do something specific, so that you can get a “lay of the land” and know who/what is helpful or not. Knowing how resources pan out for you is useful for knowing how they would pan out for others.
Road to AnimalHarmBench
… that video was sick!
Love this post, but I don’t love the title, “The Need to Optimize EA Content to Show Up in AI Chats.” I feel like I’m being told what to do and there is a part of me that is resistant and feels like, “I don’t NEED to do anything!” 😅
I much prefer something more like this section heading as the title:
“AEO Can Help Effective Organizations Amplify Their Impact”

or maybe something like, “A Guide to Answer Engine Optimization (AEO) for EA Comms”
The content is great though, and helps uncover some of the inner workings of AEO, a topic that can often be a source of disinformation. Good job!
Thanks for this great analysis Lewis and Emma! I am curious if cage-free enforcement campaigns will continue to be a funding priority for OP after the 2025 deadlines and how it compares to other interventions like the ones given in this recent post, Systems Change 101.
I think this is a good quote from that post: “In animal welfare, cage-free campaigns help millions of animals but don’t necessarily address the economic incentives and cultural narratives that will lead to future animal suffering.”
While I do think that cage-free has led to some major improvements in economic incentives (though not enough because otherwise companies and states wouldn’t be as resistant to fulfilling the commitments) and cultural narratives (“cage-free” for eggs is as recognizable as “cruelty-free” for animal testing), I suspect there may be diminishing marginal returns in terms of cost-effectiveness (people already know what “cage-free” is and the companies that were easy to convince have already been campaigned against).
Thanks Sofia!
Yeah, this is a reason why I love communities like Hive. I can always signpost people towards joining the community or posting on the help request channel first to see if the community can help them. If they are unwilling to take the extra 10 minutes to access a free resource, then yeah, it seems like they’re probably unlikely to follow through on whoever I introduce them to as well.
Absolutely agreed. One effective approach is to create requirements that provide inherent value and enable community feedback. For instance, ask them to write up their ideas in a public forum post and include an ask about whether others are working on similar projects or would find the work valuable.
This serves as both a filter (if other people respond and say the work wouldn’t be useful, or that someone else is already covering it, then you have a graceful way to say no) and a useful contribution to the community.
I’m not entirely certain that this is the case. There is always a tradeoff on what you choose to spend your time on. We aren’t trying to convince the public, or even other EAs or animal advocates, as our main ToC. We are trying to grow more fertile ground for the people who are already interested and convinced, so they have the connections, knowledge, and resources they need to pursue their own interventions. A lot of our comms happens inside our Slack community (which is intentionally high friction to join), and even then, most of it is in private channels. Here are the Slack stats from the last month: 9,388 messages from members, with 4% in public channels, 32% in private channels, and 64% in direct messages.
And it seems like there are already some external communication pieces coming out from groups like Animal Ethics (see this short documentary) for animal advocates and @Max Taylor is writing a book for the public.
We are pretty heads down on the operations of field building, which is much more manageable with a smaller, niche audience to start off with. Even then, we have more inbound interest than we can handle (we were only able to accept ~100/300 applicants to our fall AIxAnimals course, and then had to work pretty hard to expand that to ~200/300 for this spring). I’ve made the mistake too many times of trying to advertise more widely or engage with people who have lower context on what I’m working on; it is really time-consuming and has less leverage than, say, putting on a well-organized conference (see this retro) that pulls in people who already have high context and then generates the ideas and connections behind things like the documentary or the book I mentioned above.
It may not seem obvious that this is a good field-building technique, but I think focusing on an existing core community rather than external communications is much higher leverage given our current capacity constraints. It’s like making sure the grass is mature before you invite a bunch of people to come play in your park.
You and other folks are very welcome to come to our project incubator showcase next week! For 8 weeks, ~40 groups of mentors and mentees have been working on projects to push the frontier of tech and nonhuman welfare. It is a totally new program so it has taken up a lot of our time.
Then we are rolling right into our 6th conference and our 1st in-person residency program.
I think when inbounds start to dry up, that would be a signal that we need to focus more on external comms. But right now it seems like there are others happy to take on that job, and there is a lot of active work being done to narrow in on AIxAnimals interventions, which puts any official comms about our reasoning at risk of going stale pretty fast. That’s a big part of why the website sounds so generic.
That said, once things calm down and we have stable, repeating programs, I do think our website copy could use some refreshing.
Are there any other win-win situations that have been found and packaged so far, beyond the EU AI Act?
Nothing as concrete. Just other things that build the field, like people providing counterfactual value or having a bunch of conversations, some of which change people’s minds. Another potential policy avenue (which other EAs seem to hate because they think it is low impact and not very counterfactual) is trying to make sure animals are included in safety regulations for self-driving vehicles.