AGI & Animals: Discussion Thread
This week, we are discussing the statement: “If AGI goes well for humans, it’ll go well for animals”. The announcement post, with a bit more info and a reading list, is here.
What is this thread for?
General discussions about and reactions to the debate statement.
Some of the comments on this thread will be populated directly from the debate banner on the homepage — these will mostly be people explaining why they voted the way they did.
However, you’re also welcome to comment on here directly, with any considerations you’d like to share, or questions you’d like to ask.
How should I understand the debate statement?
Again, our statement is: “If AGI goes well for humans, it’ll go well for animals”
The statement will ultimately mean whatever people interpret it to mean. The key is to explain how you are interpreting the statement in the comment that you attach to your vote. However, I can share a few notes which might pre-empt your questions:
AGI: Artificial General Intelligence. What exactly this is, and how transformative it is likely to be for the world economy and our ways of life, is likely to be a crux in this debate. As such, I won’t be offering a definition.
Goes well: Likewise, what it means for AGI to go well is likely to be a live element of the discussion. For example, ‘going well’ might mean humans are still in control of AI tools, or it might mean that humans are replaced by more beneficent machines. I’ll leave this up to you.
Animals: I’m talking about non-human animals. I’m specifically naming animals rather than ‘other minds’ to signal that this conversation isn’t primarily about digital minds.
Message me or comment in the thread with me tagged if you have any questions.
I think Aidan Kankyoku’s framing of the debate question from his post is very helpful:
“Even if we think AI will be the decisive factor determining future animal welfare, should we bother with animal-specific interventions in AI? Or can we trust the usual human-centric alignment efforts to take care of animals?”
I’d love to see more discussion on the question considered in this way.
I think there are plenty of crucial sign-flipping considerations pointing both ways (sec. 1 of my post), and that our takes certainly fail to account for some of them, in ways that likely make these takes irrelevant.
And even if someone’s evaluation somehow does not omit a single crucial consideration, they have to make opaque judgment calls on how to weigh up the conflicting pieces of (theoretical and empirical) evidence. I see little reason to believe such judgment calls would do better than chance.
Clarification on what my “0% Agree” means: I confidently disagree that we should believe it’d go well for animals (sec. 1 of my post), but I don’t think we should believe the opposite either. I think our cause prio should not rely on any assumption on this question (sec. 2 of my post).
It’s true that technological progress so far has been largely good for humans and bad for animals (mainly due to factory farms, though the effects on wild animals complicate this a lot).
But I also think human values towards animals have improved compared to how they were historically, and e.g., household and working animals are likely treated better on the whole now than in the past. So I think there’s been some moral progress, but this improvement has been dominated by technology simultaneously making animal food production much more cost-effective (and, as a side effect, more suffering-producing).
I think eventually technological progress will make it cheaper to act on animal-friendly values, because I’m guessing the taste/price/convenience/friction of animal meat is hitting diminishing returns, while there’s much more room for improvement with non-animal-based foods. So I think there will be a crossover point at some stage, sort of the way it was with the Enlightenment and the industrial revolution, where the economic effects of technology probably made many people worse off at first, but eventually the better values won out and people were left better off on the whole.
I separately also think AGI and especially ASI, if aligned with human values, can wisely advise us on good courses of action and help improve our values and promote good values, which would also help. ASI could also help a lot with wild animal welfare, where we are currently quite at a loss.
The final debate week vote:
I’m seeing a bit of a majority on the “disagree” side, but it’s fascinating how much uncertainty there is here overall. This seemed like a really good debate week topic for this reason.
Also interesting to notice clusters of votes around (1) mostly disagree and (2) slightly agree, and a large gap around (3) mostly agree.
Most animals are wild animals, so the answer to this question should focus on them. It seems to me that the answer largely depends on how we understand “goes well for humans”, and what we expect the counterfactual to be.
So what are the possible scenarios?
AGI empowers humans to make their own decisions, and to make better decisions. I expect this would greatly accelerate progress toward helping wild animals. This would be great.
AGI replaces human decision-making. It then either:
Reasons further from a starting point of human values, removing biases and inconsistencies—which I think would lead it to care more about animals.
Or just locks in current human values.
And what’s the counterfactual?
A continuation of the world as it is today: one where humanity gradually cares more and more about animal welfare, and in which there is at least a potential for caring about wild animals to be normalized. In this case, scenarios 1 and 2(a) seem good, but 2(b) seems very bad.
A world in which the WAW movement fails. In this case even 2(b) doesn’t look that bad, but 1 and 2(a) seem very good.
I’m not sure if this is complete. I’m also not sure how to assign probabilities—I don’t think I know enough about AGI. But tentatively, I expect scenario 2 to be most likely, with (a) and (b) roughly equal, and counterfactual 1 to be most likely. For that reason I’m going with 20% likely to be good.
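To make that kind of estimate explicit, here’s a minimal sketch in Python, treating scenarios and counterfactuals as independent for simplicity. All the weights are placeholders rather than my actual credences (as I said, I don’t know how to assign probabilities), so the printed number is purely illustrative; different weights or verdicts move it a lot.

```python
# Scenario x counterfactual bookkeeping for "does AGI go well for animals?".
# All weights are hypothetical placeholders, not real credences.

p_scenario = {"1": 0.2, "2a": 0.4, "2b": 0.4}  # empower / improve values / lock-in
p_counterfactual = {"cf1": 0.7, "cf2": 0.3}    # cf1 = WAW movement would succeed

# Verdicts from the comment above: 1 and 2(a) seem good either way;
# 2(b) is very bad under cf1 and "not that bad" (counted as not-bad) under cf2.
is_good = {
    ("1", "cf1"): True, ("2a", "cf1"): True, ("2b", "cf1"): False,
    ("1", "cf2"): True, ("2a", "cf2"): True, ("2b", "cf2"): True,
}

p_good = sum(
    p_scenario[s] * p_counterfactual[c]
    for (s, c), good in is_good.items()
    if good
)
print(f"P(good for animals) = {p_good:.2f}")  # 0.72 with these placeholder weights
```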
But I want to say that I would not take a bet with a 20% chance of winning everything and an 80% chance of losing everything, and this feels very close to that. I think this is a terrible gamble and we shouldn’t take it. I hope that the debate results won’t be understood as EAs saying that this is a bet worth taking.
I can imagine a future where most animals are farmed animals. I’m not saying it’s particularly likely, but if humans spread to other planets, I think we’re more likely to take factory farming with us than take nature with us. Farmed animals should be part of this convo imo.
Copying my response from your other comment:
Does that mean you think it’s likely that we will spread to other planets without spreading ecosystems? If we spread ecosystems it seems likely that we would also spread at least some wild animals. And I think we have good reasons to do so—to promote good atmospheres and other ecosystem services.
I feel pretty skeptical that humans capable of going to other galaxies would not have realized the inefficiencies of meat and would still not have made competitive substitutes.
This. I do not see off-world animal farming as a real issue. It’s such an energy- and resource-inefficient way of making food. Indeed, a prerequisite for (or a proxy indicator of) an Earth-independent sustainable civilization seems to be extremely good efficiency in food production. You can’t possibly be on Mars, or build an interstellar ship, and still keep a thousand cows for making some cheese.
There are two ways to interpret this claim.
One is to interpret this claim as causal—“the things that cause AGI to go well for humans also cause AGI to go well for animals”.
In general, my concern here is something like “AGI gets aligned primarily based on 2025-era human values by imitation learning and doesn’t magically converge on my ideal philosophy”. I think what happens to animals after that would be fairly contingent on human moral evolution.
Another is to interpret the claim as evidentiary—“what happens to animals, conditional only on things going well for humans by any means?” In this sense, we likely get transformative economic growth and rapid technological advancement, which would likely shift society away from factory farming. Especially as society becomes primarily digital and/or spacefaring, factory farming is likely not economically suited to that. Though I think this is far from guaranteed.
I’m hedging between both these interpretations, which is why I end up somewhere in the middle.
Are you setting aside wild animals?
No
Oh good, I have no objection then. Well played.
I actually would agree with the inverse of this statement:
“If AI goes well for animals, it’ll go well for humans”
We are interdependent beings. And yet survival—particularly the contemporary late-capitalist understanding of survival—is treated as zero-sum. This is common amongst social movements: To view success and justice for one group as coming at the expense of another. And while the reality may be that in one snapshot of time, it looks that one is benefitting more than another, if we zoom out and understand how things undulate, it becomes clear that on the whole, when we lift up others, it is mutually beneficial.
I think that when we care and construct a world that honors the most vulnerable, we create a better world for ourselves. However, I disagree with the causality of this statement because “human” ends, as they are currently interpreted by systems of power, are exclusionary of animal interests.
I think this depends on whether farmed or wild animal welfare matters more. I don’t have an answer, so let’s treat it as 50/50.
If wild animals matter more, what could happen? On the upside, AGI might enable us to help wild animals. On the downside, it might lead to humans creating biospheres on other planets, which would increase the suffering of wild animals by many orders of magnitude.
If farmed animals matter more, the upside could be that AGI enables us to substitute farmed animals completely (cultivated meat, etc.). The downside could be that people get richer and want to eat more meat, or that AGI changes the production of farmed animals in a way that increases suffering.
Again, I don’t know whether the upside or downside in each scenario is more likely. Let’s say each is 50/50 again. I think this makes 1) EV negative and 2) EV positive, with the aggregate being slightly EV negative.
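Here’s a minimal sketch of that aggregation in Python. Only the 50/50 weights come from my reasoning above; the welfare magnitudes are invented placeholders, and the conclusion flips easily if you size the stakes differently.

```python
# Two-branch expected value: which animals matter more, then upside vs downside.
# The 50/50 weights are from the comment; the magnitudes are made up.

p_wild = 0.5   # wild (vs farmed) animal welfare matters more
p_up = 0.5     # upside (vs downside) within each branch

ev_wild = p_up * 2 + (1 - p_up) * (-4)    # help wild animals vs seed new biospheres
ev_farmed = p_up * 3 + (1 - p_up) * (-2)  # replace farming vs richer meat demand

ev_total = p_wild * ev_wild + (1 - p_wild) * ev_farmed
print(ev_wild, ev_farmed, ev_total)  # -1.0 0.5 -0.25: slightly negative overall
```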
Nitpick, but it seems unfair to consider this an upside rather than the mere absence of a downside, since the relevant counterfactual scenario in expectation (if no AI safety work) is a misaligned AI that takes over and probably ends animal farming as it kills or disempowers humans.
AI safety cannot take the credit for a potential future reduction or end of farmed animal suffering if it preserves humanity, without which animal farming would not exist to begin with.
It seems unfortunately plausible that despite technological progress toward alternatives to meat, humans have a revealed terminal preference for animal suffering, which means that short of extinction we are on a default trajectory to astronomical suffering.
I’m very uncertain. My main crux is something like: What is the most likely ‘AGI’?
If we are talking about increased productivity/efficiency, I’d expect that things get worse for animals for a while, and then get better as incentives continue to push us toward non-animal agriculture.
If we are thinking of an intelligence explosion / uncontrollable machine god, then my expectations matter for nought, and my vote is a settled 0%.
If we are thinking of a controllable intelligence explosion / machine god, then animals might be in the worst position—since revealed human preferences don’t seem to be great for animals so far.
Reminder that you can kick off sub-debates within this discussion thread. Just highlight some text in a comment and then click ‘Insert poll’.
This’d be especially useful as a way to find (and discuss) the cruxes that are driving the different views on this debate.
Very uncertain on this one; mainly a matter of “I just don’t see why it would” and a strong default to “technological progress has largely been bad for animals.”
I do think the “better” AI goes for humans (or broadly, the more “extreme” the outcome is), the more likely it is that factory farming would basically disappear incidentally.
However, I think a large range of possible futures where AI goes well for humans are (comparably) normal scenarios, in which I just don’t have any strong reason to believe that they would go well for animals.
I think the answer to this question is too many branches down a tree of possible futures to meaningfully predict. What happens at multiple branch points could swing this either way. If I have time I’ll share more about what I mean.
A world of intelligence too cheap to meter is more teleological: constraints and tradeoffs that exist now are washed away, and what matters is mainly what people ultimately value. And more people ultimately value animal welfare than animal diswelfare. The main game is wild animals, and the ~only way for things to go well for them is if we build an ASI that can eventually reshape the natural world to be less suffering-filled. I think it is very unlikely farmed animal suffering is exported to other galaxies in a major way, because at technological maturity animals will not be the most efficient way to meet human material needs.
What about the risk we spread wild animal suffering to other planets?
Good point, that seems like a big risk! I expect the fraction of sentience that is (post)human or digital to be quite high, especially compared to today, in the intergalactic future. But improving values wrt wild animals seems important.
depends a lot on how much control AIs end up having, their values/reasoning, and which (if any) humans end up getting power
Here’s how I’m thinking about this:
From the perspective of non-human animals, humanity looks a lot like an unaligned superintelligence. We closely resemble the “paperclip maximizer” thought experiment, where the “paperclips” are narrow human goals. Over millennia, we’ve become incredibly good at optimizing for those goals, but in the process we systematically exclude other sentient beings from the moral circle and override their most basic interests for benefits that are often trivial.
Given this reality, without a fundamental shift in our ethics, superintelligence is more likely to scale our existing biases than to correct them. A more powerful optimizer does not automatically become more benevolent; it just becomes more effective at pursuing the same goals. And higher intelligence and capability do not by themselves fix moral blind spots.
This is precisely the insight that drives concern about AI alignment. We do not assume that more capable and intelligent AI systems will automatically act in ways that are good for us. (Even though they do not have anywhere near as bad a track record as humanity does toward animals.) If “automatic benefit” were a real thing, AI alignment would be a niche concern rather than a central one. We would just accelerate progress and trust that everything else would sort itself out. But we do not believe that, and for good reason.
If we take this insight seriously, we should also apply it symmetrically. The core alignment problem may not just be between humans and AI, but between humans and the rest of sentient life. And it would be dangerously Panglossian to assume that AGI will automatically solve animal suffering. Based on humanity’s track record of causing massive harm despite our increasing capabilities, it is irresponsible to default to optimism about AGI “naturally” improving things without a justification that matches what is at stake.
Extra thought 1:
And the thing that worries me most about human alignment? Permanent lock-in. If we reach advanced AI systems without deliberately including concern for all sentient beings, we risk locking in a future where today’s exclusions last for a very long time. Once such systems are embedded in infrastructure, institutions, and potentially self-improving AI, their underlying value structures may become extremely difficult to change.
A historical analogy makes this clear. Think about the Industrial Revolution, a massive event that empowered humanity. If it had happened in a society that cared about animal welfare (maybe a vegetarian country?), trillions of farmed animals could have been spared extreme suffering (cramped spaces, painful procedures, and deaths for basically trivial human gain). Early ethical choices really do shape the fate of huge numbers of sentient beings.
Moreover, the stakes are far greater this time. Humanity will remain a tiny outlier, yet one that would hold disproportionate power over a vastly larger number of sentient beings in the future. So, misalignments now could ripple across astronomical numbers of individuals, turning a large-scale moral failure into a potentially permanent, cosmic-scale one.
Extra thought 2:
It’s sadly all too common for us to push animals to the very bottom of the priority list, thinking, “Once we fix all our problems, we’ll start worrying about extreme animal suffering.” So I’m really glad to see this discussion happening!
My position statement
As a suffering-focused ethicist who generally rejects moral aggregation across individuals (I am most sympathetic to painism), I have a higher bar for “AGI going well for humans” than many others do; it’s not clear to me that previous technological advances went well for humans:
Agricultural revolution’s “luxury trap”: going from hunting-gathering to farming allowed humans to consolidate unprecedented wealth and power, but at the cost of the wellbeing/welfare/rights of very many humans
Perhaps similar arguments can be made for the industrial and digital revolutions
Even AGI Omelas is not an instance of AGI going well
“AGI going well” necessarily leaves many humans with the stated preference to help animals (e.g. abolishing animal exploitation and solving wild animal suffering), and it certainly gives us the means and opportunity to do so
I happen to think that AGI going well for humans is unlikely, even by the lights of someone who is more upside-focused
We’re on track for creating something that is more intelligent than us (better at understanding the world and achieving goals within it) – and probably something with awareness, autonomy, agency, and the capacity for recursive self-improvement and self-replication – without understanding how it works, how to make it do what we want, or what it is we even want it to do
So, between normative and empirical claims, I believe a world in which AGI goes well for humans is a very small fraction of the possibility space
And when I try to think about what this AGI-going-well-for-humans world looks like, mostly I don’t really know, but it seems likely that in this world:
We retain and develop our moral wisdom (the most fundamental tenet of which is plausibly “non-maleficence and compassion towards all sentient beings”)
And we have the means to enact this moral wisdom
So, we abolish animal exploitation and solve wild animal suffering
Thus, AGI goes well for animals as well as humans!
On priors, technology has not been good for animals. So I weakly lean against. But it could go either way.
The poll defines “probably” as 70% chance. In this post, I wrote that I thought there was a ~70% chance that AGI would go well for animals.
I guess that means I believe there’s a 50% chance that there’s a 70% chance that AI goes well for animals? So I should vote in the exact middle of the spectrum?
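One way to collapse a nested credence like that is to take the expectation over the inner probability. A minimal sketch, where the 50% fallback value is an assumption I’m making up:

```python
# Collapse "50% chance that there's a 70% chance" into a single credence.

p_right = 0.5     # credence that my earlier ~70% estimate is correct
p_if_right = 0.7
p_if_wrong = 0.5  # assumed fallback: maximal uncertainty if the estimate is wrong

p_goes_well = p_right * p_if_right + (1 - p_right) * p_if_wrong
print(p_goes_well)  # 0.6 -> slightly agree, rather than the exact middle
```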
I’m reading “goes well for humans” as including “goes well for human values broadly, accounting for further refinement of human values”.
Not by default. I think humans, by and large, don’t care enough about animals for this to work.
Originally, I voted that I slightly agreed, assuming AGI would accelerate developments like cultured meat. Upon further reflection, I realized that advances in technology over the past couple of centuries have been overwhelmingly good for humans but arguably devastating for animals, particularly through factory farming. That makes me lean toward slight disagreement: more powerful technology hasn’t automatically meant better outcomes for animals, and there’s no guarantee AGI will be the exception.
I don’t have a singular directional intuition about this.
I’m not sure AGI isn’t already here.
There are some scenarios where AGI liberates us from constraints and others where it enables humans to extend their dominance over animals. Who can say?
In the meantime, AI does not absolve us of helping animals.
Something about this topic creates a semantic stop sign for people whose opinions I otherwise find interesting. So even if the subject is interesting in the abstract, I’m afraid here and now it sometimes leads to worse discussions.
If we take things as they stand at the moment, AGI going well for humans doesn’t translate to AGI going well for animals. There is however a world where AGI has been trained in such a manner that it recognises sentience as its metric for moral consideration, which in turn would result in AGI going well for humans and for animals alike.
Say we define “going well” simply as “better than the status quo”. For a human, “going well” might mean radical life extension or a post-scarcity economy. With the status quo for animals (especially factory-farmed animals) currently being incredibly low, “going well” might just mean a slight improvement in living conditions, such as slightly less cramped cages. In that sense, human and animal interests are currently very unaligned.
Assuming AGI solves problems humanity can’t (such as war, mental illness, climate change, resource limitations), it likely requires some level of control over human actions. If we lose our autonomy to an AGI but it ends factory farming, did things “go well”? Or does the loss of what makes us human outweigh the reduction in suffering?
Without timely interventions, such as training AGI to recognise sentience, we could just as easily slide into a “Hyper-Efficiency” trap when it comes to the welfare of farmed animals:
Precision Suffering: AI could monitor disease and stress just enough to pack animals into even higher densities without them dying, maximizing profit while ignoring the pain threshold.
Automated Sentience: AGI could bring trillions of insects or fish into existence in zero-welfare, fully automated systems.
Genetic Masking: We could see AI-accelerated breeding for “pain-insensitive” animals, which might hide visible distress while the internal sentient experience remains one of suffering.
Humane-washing: Minor AI fixes, like identifying a single sick cow, could be used as a “Welfare Facade” to hide systemic cruelty from the public.
On the flip side, the positive path is equally transformative. AI could render slaughterhouses obsolete by making cultivated meat cheaper and more accessible than farmed meat. Even more interestingly, AI “translation” of animal vocalizations could create a social breakthrough. If we can decode and prove an animal’s expressed distress in real-time, it becomes much harder for human society to ignore their sentience.
If AGI/TAI is powerful enough to cause a fundamental break in our current legal and social systems, then animal advocacy needs to shift. For example, should the long-term goal of animal advocacy be to influence the “Constitutions” of labs like OpenAI, Anthropic, and DeepMind to ensure that “sentience” (the ability to feel), rather than “intelligence,” is the metric for moral consideration?
As things stand, AGI going well for humans does not automatically translate to it going well for animals. We could easily build a human utopia that functions as an animal nightmare. To avoid that, we need to ensure AI is trained to recognize sentience as a non-negotiable variable in its moral calculus.
I take “AGI goes well” to imply a wealthy and technologically advanced society. I think that could mean:
- Very cheap and delicious meat alternatives.
- Factory farming waning as it runs into inefficiencies and bottlenecks, unable to compete with the above.
- More demand for higher-welfare options like free-range and local produce.
But it also seems possible that we “lock in” factory farming and scale it further and that AGI adopts speciesist views.
Very uncertain, I don’t find myself strongly disagreeing with claims across the spectrum.
If AI is successfully aligned to “human values”, that would include animal agriculture and conservationist ideology, perpetuating and potentially expanding nonhuman animal suffering to other planets even while humans thrive.
TL;DR: I don’t think there’s sufficient evidence to make such a claim.
It could go either way, but because the statement is phrased positively, I disagree.

I think it’s more likely to improve the conditions of non-human animals than not, because I think it may accelerate lab-grown meat (dairy, eggs, etc.) technology to the point where it becomes cheaper than farmed meat, in which case the conditions of animals will considerably improve. However, if this doesn’t occur, AGI could further increase animal farming and efficiency, considerably worsening the conditions of non-human animals. While in the past technology heralded expansions of the moral sphere, I think the case of non-human animals may be psychologically very difficult in comparison.

If AGI is independent of humans but still aligned with their interests (which is itself improbable), we could see it trying to improve the conditions of non-human animals for the same reasons we do, but this is probably unlikely. Of course, the proposition takes no position on the likelihood that things will go well for animals, human or otherwise. It’s unlikely AGI will significantly improve the conditions of wild animals, as humans aren’t likely to have the incentive to, and alignment with human interests means that the AGI probably won’t either.
I would estimate my disagreement at roughly 90% to 95%.
Default human values are largely indifferent or actively hostile to the suffering of non-human animals.
Humanity currently oversees massive amounts of animal suffering through factory farming & habitat destruction.
If an AGI were perfectly aligned to make things “go well” for humans, it would likely prioritize human flourishing, economic growth + resource acquisition. If human preferences do not drastically shift toward minimizing animal suffering, an AGI will have no inherent reason to protect animals, & might simply optimize the systems that currently exploit them.
A scenario where AGI goes exceptionally well for humans often includes escaping Earth, avoiding extinction & engaging in massive space colonization. From my perspective, this is a prime driver of astronomical suffering (s-risks).
Humans often romanticize nature. If humanity uses AGI to terraform other planets or seed life across the galaxy, they might intentionally or accidentally spread wild animal suffering on an astronomical scale.
A highly advanced, human-aligned AGI might run countless simulations of Earth’s evolutionary history for scientific or entertainment purposes. Tomasik has written extensively on the catastrophic moral implications if these simulated animals possess sentience & experience pain. That’s not so improbable given enough time imo.
Suffering-focused ethics prioritizes the prevention and reduction of extreme suffering over the promotion of happiness or human survival at all costs.
A future that “goes well” for humanity typically implies human survival, joy, and unfettered expansion.
For me a future only goes objectively “well” if the total amount of extreme suffering is minimized. Therefore, a human utopia built alongside, or simply ignoring, the continuous suffering of biological or digital animals would be viewed as a profound moral failure.
The small percentage of agreement would stem from the idea that if humans are wiped out by a misaligned AGI, animals might also be destroyed in the process (e.g., if the AGI harvests all biological matter on Earth). If AGI goes well for humans, animals at least avoid that specific instrumental convergence scenario. Furthermore, human prosperity could eventually lead to moral circle expansion, where humans use AGI to actively intervene in nature to reduce wild animal suffering, but I view this as highly contingent.
I’d really like it if AI resulted in amazing plant-based or cultured meat, and if the general abundance coming from AI meant that people could focus their thinking on morality, not just on making their lives go okay.
BUT, so far, new tech and improved economic conditions have made farmed animal suffering worse.
So I have a big uncertainty, but lean disagree.
I’m quite uncertain, but in general I don’t think it’s been the case that “if X technology goes well for humans, it’ll go well for animals”. I think in some key cases, it’s been the exact opposite, actually—e.g., industrialization leading to the rise of factory farming and killing/causing suffering to many more animals.
However, I also think that AGI is going to be quite different from most technologies, at least in some ways (and definitely as it goes past AGI to ASI), and so I’m quite uncertain about how “going well for humans” might positively impact “going well for animals” in this specific case.
But I still see AGI as mostly being a technology developed by humans for human purposes, so it will be guided as such. And humans still predominantly use other animals as resources (for food, testing, raw materials, etc.). So, I think the default trajectory would probably be negative unless there is significant effort invested in helping AGI go well for nonhumans specifically.
Vibes, I have no idea, I hope someone convinces me with good takes
Hi! There’s no labels on the slider bar so it’s initially unclear which side is agree vs disagree.
Oh no, thanks so much for flagging this! Toby was on holiday today unfortunately, so I’ve just updated it.
Fair call disappearing after dropping the debate slider to avoid the upcoming bedlam...
AGI could, in principle, find solutions for the key problems that animals face, but I would argue the main issue is that it won’t automatically enlighten humans.
The epoch of superintelligence will not result in any meaningful improvements in animal welfare. Previous epochs of humanity, marked by transformative advancements such as the industrial and digital revolutions, have failed to yield meaningful improvements in animal welfare. If anything, these shifts created novel pathways for animal exploitation or rendered existing models, such as animal husbandry, vastly more lethal and efficient through the rise and continued development of factory farming. Unless there is a massive societal-level dietary shift (which I think is unlikely), or a fundamental change in how society views animal bodies as exploitable resources free for the taking, AGI will likely follow this historical precedent. Like other transformative technologies before it, AGI will be leveraged to further optimise efficiency and, by extension, the lethality of existing egregious practices.
I am extremely uncertain on this point. While there is a possibility that an aligned AI could be immensely beneficial for animals, I believe this is an outcome we absolutely cannot take for granted.
Broadly speaking, it is difficult to assess such a scenario without knowing the specific form an ‘aligned’ AI will take and what a world where humans coexist with an AGI or ASI will actually look like. As some have pointed out, if this AI were to simply ‘lock in’ current human values indefinitely, it would likely be really bad for animals.
It seems probable, however, that in the long run, an aligned AI scenario would eliminate our need for animals in most current capacities (factory farming, drug testing, etc.). Therefore, AI could potentially improve things for animals directly harmed by humans, though this would still depend on our willingness to move past these practices once they are no longer necessary.
But the real long-term stakes likely lie with wild animals, and here the risks appear far more significant. If we model an aligned AI on current human values, it might seek to preserve Nature for as long as possible, on our planet and potentially others (perhaps even ‘seeding’ other planets with wildlife). Given the immense scale of natural suffering, this could result in damage of colossal proportions.
My definition of going well for humans: for the existing population, there would be a re-allocation of resources. Food and water would be rationally distributed based on basic needs, and once basic needs are all met, based on wealth (or the ability to generate progress for the society).
With this as a premise, I think 1) there would be no need for factory farming, and 2) the welfare of all, including animals, would be rebalanced.
For point 2), I very much think the lack of welfare of animals is reflective of the lack of welfare in humans themselves.
I worry I’m too pessimistic in general, but the world economy (and general living standards) have improved significantly over time, while farmed animal welfare seems to have become a lot worse. That seems to be evidence that amazing technological progress won’t be sufficient for animal welfare progress.
“Goes well for humans” (i.e for a very long time) worlds are mostly worlds where AGI is fully theoretically and empirically aligned with a CEV-shaped alignment target, which for me logically requires animal welfare. (I also currently believe those worlds to be implausible because no company seems focused on this)
I struggle to imagine any deliberative or reflective-preference oriented process that does not give the right answer to the animal welfare question. If it doesn’t care about non-human animals, then it means animals are not sentient, or that the CEV is misaligned with human interests and some humans will die because they don’t check the right boxes (and sentience isn’t a box), or that morality is weird and it’s actually fine to torture sentient beings (possible but implausible).
There are other worlds where “goes well for humans” means corrigible and aligned on some unaltered human values. In those worlds, I expect animals to take a blow in the short term, and possibly in the very long term if the principal does not care about animal suffering. I also expect humanity to do other morally wrong things that it doesn’t suspect are wrong, and to die counterfactually much sooner.
See my post:
We are AGI to animals now, and we may be net negative as a species. See my short post for an intuition pump:
https://forum.effectivealtruism.org/posts/et6tWRzgXHBHRciuN/quick-pig-based-intuition-pump-on-superintelligence-and?utm_campaign=post_share&utm_source=link
The timelines where AGI goes well would probably 10x the resources available for improving animal welfare. It would probably be similar to just “buying shrimp stunners” for the shrimp farmers who are indifferent.
What does “going well” mean?
It seems plausible that many things could be a lot better, like making factory farming obsolete. Does that mean that animals are no longer experiencing extreme suffering? What is our baseline for animal welfare?
Under hedonic utilitarianism, an aligned superintelligence solves metaethics and fills the universe with hedonium.
I don’t think this is guaranteed, but if AI goes well and is used to develop better-tasting, healthier, and more cost-effective cultivated meat, it will be more likely to be adopted and will reduce reliance on factory-farmed animals.
On the other hand, wild animals could be preserved as they are or spread throughout the galaxy, potentially increasing net suffering.
I think AI going well would leave a lot of this up to human choice and is therefore uncertain.
I appreciate there’s a lot of nuance in the question but some rough thoughts:
Number of ways AGI goes well for humans and good for animals < number of ways AGI goes well for humans and bad for animals.
Moral expansion or inclusion of animals is not obvious to me to be guaranteed (in the near- or long-term future). And I think there are a lot of people today (e.g. in other cultures and generations) who don’t value animal welfare, or value it only negligibly.
Short of an AGI utopia with unlimited resources, I think there will be tradeoffs between animal and human considerations where animals will nearly always come second. E.g. it seems plausible that AGI will accept a +10 human-wellbeing outcome even if it includes a −11 animal-wellbeing outcome.
Moral lock-in, like others have mentioned
My thoughts are based on a scenario where we only do human-values alignment.
From @Tristan Katz:
Does WAW dwarf FAW in expectation? Or is FAW still important to consider in this discussion?
Most animals today are wild animals, but for the answer to this question to focus on them, most future animals would also have to be wild.
Fwiw my intuition is that most future animals will be wild because it seems more likely that we terraform by seeding ecosystems than that we export energy inefficient factory farming. That said:
a) I feel uncertain about that position.
b) The post-AGI future will be pretty weird, and our distinction of wild vs farmed animals probably won’t map neatly onto future sentient beings.
Even granting that the overwhelming majority are wild animals, this doesn’t necessarily imply we should focus on them. We have to factor in the welfare difference between the two (welfare ranges and quality of life in practice).
Yes
Not necessarily, because S-risks may be more important in expectation (e.g. a malevolent or vindictive ASI tiles the universe with extremely energy-efficient animal-like beings of pure suffering).
(Copied from my Symposium position statement)
If I accept conventional assumptions in EA Animal welfare[1], AGI will be negative for animals in expectation. On the other hand, AGI being good for humans makes it worse for animals in expectation. However, both rogue AGI and human-friendly AGI seem positive for animals in most scenarios: it just happens that the “bad” scenarios seem much worse than the “good” scenario.
Why is that? AGI, whether rogue or human-aligned, may not decide to keep other planets free of biological animals (though it seems like a bigger risk for human-aligned AGI). And EA Animal Welfare advocates generally believe that the likelihood that wild animal welfare is negative makes such spreading of biological animals too risky.
A small chance of this decision being made outweighs the positives. This seems very unlikely with rogue AGI (0.1%, perhaps much less), but it could still dominate the scales in my view. An AGI that is more human-friendly seems at least one order of magnitude more likely to terraform other planets.[2]
That said, this doesn’t flip the sign of AI safety work. This judgment is lightly held; digital minds (human-like or animal-like) are a larger portion of welfare patients in expectation; and I have no idea what the counterfactuals are. Thus, I don’t treat this as an action-guiding belief.
To caveat, I think terraforming is still relatively unlikely in human-friendly scenarios because biodiversity becomes less instrumentally valuable post-AGI, so memes that would favor the existence of wild animal populations would lose popularity. Even in human lock-in scenarios, the values that control AGI won’t favor deep ecology.
How about farmed animals? Even in precision livestock farming’s best and worst cases, suffering in factory farms shifts by a few orders of magnitude at most.[3] AGI makes the end of factory farming through developing alternatives more likely, though I’m more convinced by “biological food systems become unnecessary or unrecognizable” than “clean meat wins”. In the vast majority of scenarios, wild animals would be the most numerous moral patients.[4]
However, again, alien counterfactuals probably mess all of this up. If biological beings from other planets can colonize planets in our sphere of influence, then I have to put myself at 0%.
[1] Farmed animal welfare is negative, wild animal welfare is negative, and “good” and “bad” relate to expected total welfare.
[2] Though what that looks like is still underdefined.
[3] However, precision livestock farming offers massive near-term risks and opportunities for farmed animals, and interest in this area appears justified.
[4] Human-friendly AGI could decide to only keep animals under human control, but that would probably not lead to massive animal populations.
Due to Value Lock-in, TAI poses a time constraint for farmed animal social progress.
I do not expect most issues to be resolved before this time, due to technological limitations, heightened barriers to social change relative to historic movements, and increasing developing world meat consumption.
If we open this up to wild animals rather than just farmed, net-negative outcomes are much more assured.
AGI going well for humans in my mind suggests we all get uplifted as much as we like into some post-scarcity utopia, and if that happens I can’t imagine animals getting a different outcome. It may be delayed, it may look different, we may not even have direct control or understanding of it, but it seems implausible that for some reason superintelligence deems humans uniquely important compared to other biological sentience.
Factory farming has gone up along with the same forces of economic expansion that have made things go better for humans over the last 80 years. I don’t see any fundamental reason that AI would change these trends.
I don’t want to be a pessimist here, so I slightly moved my avatar to the right… I hope it will be good for animals...
I expect that factory farming will become even more harmful as a result of AGI
Value lock-in is the central variable. If AGI leads to lock-in of current human values, then humans may survive while animals keep suffering.
If by “AGI goes well” we also include the continuation of things like moral progress (which current AI existential safety work does NOT address!), then the two are indeed aligned.
Successful CEV is likely to lead to improved outcomes for animals.
I think it’s 70% likely to go well for animals, but that’s not enough to obviate the need for animal-specific alignment efforts. Full take: https://forum.effectivealtruism.org/posts/skdp9uB4AoyN2fnuu/animal-welfare-is-just-part-of-ai-alignment-now-and-both
We haven’t done so well for animals thus far. That said, I hope that superintelligence will respect all intelligences.
I don’t really know, but my starting model would be… unless AGI is applying utilitarian models, it would likely rate human welfare above animal welfare by enough orders of magnitude to make any animal welfare consideration insignificant. The developments could allow for an end of farmed meat and the like, but that would also make the need to have animals as such… mostly redundant? You might have reservations for animals… Dunno.
Two ways it goes well for animals:
1. As incomes rise globally, things initially get worse for animals because demand for meat rises. But once incomes rise from high to very high, demand for high-welfare meat increases and factory farming is eventually outlawed (not everywhere, but almost).
2. Economic development spurred on by AGI leads to further displacement of wild habitats, reducing wild animal suffering.
If AGI takes on the same values as humanity as a whole, factory farming will continue, which means it would not go well for the animals.
Whether AGI “goes well” or not is to me an X-risk question. Which makes me read this question as:
> If we survive AGI, are animals likely to be better off than if they were extinct?
To which I answer: Probably yes.
So far, much of technological development seems to have gone well for humans—for example, in developed nations, we have never had to do less hard manual labour, or had access to more information. That has not led to an improvement in the quality of non-human animal lives. In fact, we have seen exactly the opposite. AGI is likely to amplify this effect unless we make a significant conscious and coordinated effort to steer it another direction.
Slightly leaning toward agreement: moral progress in that area would become so cheap that people accept it.
Epistemic status: not professional, not sure.
IMO if AGI goes well for humans, then at least it would have a decent grasp of general ethics, which includes animal welfare. AGI that hasn’t got good ethics wouldn’t benefit humans; it’d just paperclip around. Since I have a short-ish timeline, I think a somewhat-ethical and empowered AGI will benefit animals more than speciesist HGIs.
No particular strong reason, this is my intuition but curious to see people’s reasoned takes.
Seems like AGI will lead to ASI and ASI will show us more valuable ways to use all the land and matter that currently support animal suffering. The ways we use those probably won’t involve animals or suffering at all.
The good news is that life on Earth has been going better and better for humans over the millennia. For instance, we have technology that makes it easy to grow tons and tons of food so lots of people can eat as much as they want. We have cures for lots of previously deadly diseases so lots of us humans can live a very long time. And lots of people live in countries that recognize their rights. We also have a robust international economy that makes it really easy for a large number of people to buy the goods and services they want—and for lots of other people to get paid producing those goods and services!
The bad news is that none of this has translated to things going well for animals. :-( In fact, it has translated to the opposite. Things have been going worse and worse for animals over the millennia. For instance, factory farming, which causes a HUGE amount of suffering for animals, developed very recently in human history, and it developed as a byproduct of humans getting the things they want most (like a great economy, and the ability to produce food cheaply). So we have seen that humans getting more and more of what we want doesn’t translate to animals getting what they need. Of course, humans do also want animals to be treated well, on some level! But humans’ main goals are human-oriented goals. And so when we get more and more ability to achieve our goals, we put those human-oriented goals first, resulting in negative externalities for animals.

If AI goes well for humans, it’ll go well for humans. It’ll be aligned with what humans want. And that will mean it’s aligned with prioritizing human interests over all others. Sure, it’ll care about animals a little, the way humans care about animals a little. But it will continue to put human interests first. And that will continue to result in externalities for animals.
The same way people harm animals now (e.g. for food, entertainment, fashion, science, etc.) may continue. And new ways to harm animals may develop that we never could have imagined before AI. For instance, people love having pet dogs. When their pet dogs die, people are sad. People may want to be able to upload their pet dog’s brain to the cloud to hang out with the pet dog when their dog dies. But trying to develop this technology may be a lot of work. AI may do the work by uploading 100,000 dog brains, or 100,000 copies of the same dog brain, to the cloud, and running various tests to see what works best. Perhaps a lot of these dogs will suffer immensely due to some mistake AI made in an early draft or some feature AI failed to include. And perhaps the suffering will be made worse because the dogs don’t have bodies and cannot even express their suffering without vocal cords or paws. Eventually, AI may work out the kinks before it rolls out the final keep-your-dead-pet-alive-as-an-app-on-your-phone product. But there’s all that behind-the-scenes suffering in the meantime. Humans care about animals a little. But humans love to turn a blind eye to behind-the-scenes suffering, so humans won’t be too upset about this situation. Then maybe AI realizes humans would like an upgrade to their pet-on-your-phone product. And that means AI needs 100,000 more copies of dog brains to do more experiments. AI that is fully aligned with human interests would realize humans would like the upgrade more than humans would be bothered by the suffering inherent in creating the upgrade. So AI will create the upgrade. This is just an example to illustrate my point. But I think there are lots of ways animals can be caused to suffer that we can’t even imagine right now.
What animals need is for AI to be aligned with animal interests, too—not just human interests.
A couple of different potential mechanisms could help farmed animals:
Solving cultivated meat or brainless animals
Creating better welfare technologies (e.g. solving all disease issues on current farms)
Generating enough societal wealth to make welfare improvements like lowering stocking density trivial
More abstractly, people generally care about welfare, so it will be one of the things that an aligned AGI optimizes for. However, it won’t be optimal for animals because AGI won’t be directly optimizing for animal welfare. For example, most people don’t think it’s wrong to eat meat, and we might still not want to do things like beneficial vaccines or genetic edits.
Wild animals, less clear though!
If AGI goes well for humans, this will likely mean a lot of technological development. This would likely include technologies allowing for products equal or superior on the dimensions humans like, without the animal welfare entailments. I realize that there have been some arguments that people would still prefer products created through suffering even if alternatives could be just as cheap, satisfying, and convenient, but I think attitudes would change in the medium to long term if those conditions were met.
I don’t agree that AGI will go well for humans, and hence I am disagreeing. And if it doesn’t go well for humans, it won’t go well for animals either. The logic here is simple: humans are also animals.