On the longtermist case for working on farmed animals [Uncertainties & research ideas]
I also considered the following title for this post, which might be more fitting: Does expanding moral circles to one type of being also expand them to other types of beings?
Summary
Some people think that longtermists should prioritise work focused on farmed animals in the near-term future. The argument for this typically includes the premises that the vast majority of all the suffering and wellbeing that ever occurs might be experienced by beings which humans might have little to no moral concern for (e.g., artificial sentient beings), and that some work focused on farmed animals could increase the chance that humans have moral concern for those beings.
I find this argument and conclusion plausible, but also quite speculative. Part of my uncertainty has to do with whether expanding moral circles to include farmed animals would also expand them to include the relevant other types of beings, and whether it’d do so more effectively than other actions would.
Below, I outline several ideas for research projects that someone could do to reduce our uncertainty on those points. These include reviews of relevant literature, expert elicitation, surveys, experiments, and historical research.
Note: This post is adapted from a doc I wrote around July 2020. Since then, I’ve become somewhat less confident about the importance of the ideas raised and projects suggested here (though I still think they might be important). And if I were writing this from scratch now, I’d probably frame it somewhat differently.
I wrote this post in a personal capacity, and it does not necessarily represent the views of any of my employers.
My thanks to Tobias Baumann and Megan Kinniment for helpful comments on an earlier draft of this post.
A longtermist argument for farmed animal welfare work
Some people think that longtermists should prioritise work aimed at reducing near-term farmed animal suffering, and some people indeed seem to be doing such work for longtermist reasons. For example, I think this roughly describes the Sentience Institute’s views. (See also posts tagged Non-Humans and the Long-Term Future.)
I think that that position is typically based on something like the following argument (see, e.g., the post Why I prioritize moral circle expansion over artificial intelligence alignment):
Premise 1: It’s plausible that the vast majority of all the suffering and wellbeing that ever occurs will occur more than a hundred years into the future and will be experienced by beings towards which humans might “by default” have little to no moral concern (e.g., wild animals on terraformed planets; artificial sentient beings).
Premise 2: If Premise 1 is true, it could be extremely morally important to—either now or in the future—expand moral circles such that they’re more likely to include those types of beings.
Premise 3: Such moral circle expansion (MCE) may be not just important but also urgent. This is because there may be a “value lock-in” relatively soon, for example due to some ways the development of transformative artificial intelligence (TAI) may play out.
Premise 4: If more people’s moral circles expand to include farm animals and/or if factory farming is ended, that would increase the chances that future actors’ moral circles will include all sentient beings (or at least all the very numerous beings).
Conclusion: Work that supports the expansion of people’s moral circles to include farm animals and/or supports the ending of factory farming could therefore be both (a) extremely morally important and (b) urgent.
(Of course, one could also arrive at the same conclusion via different arguments, including ones that entirely focus on the intrinsic significance of near-term suffering and wellbeing. See also this comment thread. Also, one could use a longtermist argument similar to the above one in order to argue for focusing on near-term wild animal welfare work; I’ll return to this point below.)
Uncertainties about Premise 4
Personally, I find the four premises above plausible, along with the conclusion.
However, each of those premises also seems quite speculative. This is often hard to avoid, especially in longtermism. But it seems to me that there are tractable ways to improve our knowledge regarding Premise 4 (and related matters), and that the value of information from doing so would be very high. So I’ve generated some ideas for research on Premise 4 which I think it might be worthwhile for someone to pursue, which I’ll describe below.[1]
Essentially, I’d be very confident that Premise 4 was true if moral circles were effectively “unidimensional”. However, it seems to make more sense to think of moral circles as multidimensional, such that a person’s moral circle can expand along one dimension without expanding along others, and that two people could have differently shaped “moral circles”, without it being clear whose is “larger”. (See Moral circles: Degrees, dimensions, visuals for elaboration on these points.)
Thus, it seems plausible that expanding a person’s moral circle to include farm animals doesn’t bring the “boundary” of that person’s moral circles any “closer” to including whatever class of beings we’re ultimately concerned about (e.g., wild animals or artificial sentient beings). Furthermore, even if expanding a person’s moral circle to include farm animals does achieve that outcome, it seems plausible that the outcome would be better achieved by expanding moral circles along other dimensions (e.g., by doing concrete wild animal welfare work, advocating for caring about all sentient beings, or advocating for caring about future artificial sentient beings).[2]
Research on these points could have major implications for which research and interventions should be prioritised by people focused on animal welfare work, MCE, and/or benefitting non-humans in the long-term future.
Additionally, research on these points could suggest that some people who aren’t focused on animal welfare work, MCE, and/or benefitting non-humans in the long-term future should focus on those things. This would occur if the research ends up providing further support for Premise 4 and/or revealing more cost-effective interventions for long-term-relevant MCE than near-termist work on farmed animal welfare.
(I’m not the first person to have considered roughly these sorts of points. For example, there are somewhat similar ideas in this talk by Jacy Reese.)
How could we reduce those uncertainties?
I think it might be worthwhile for someone to conduct research aimed at answering the question of how interventions that expand moral circles along certain dimensions (or to certain types of beings) spill over into expanding moral circles along other dimensions (or to other types of beings).
This question could be tackled:
Relatively directly, though that might require expensive experiments; or
Somewhat indirectly, by investigating how expansions of moral circles (whether or not they’re caused by “interventions”) along certain dimensions spill over into expanding moral circles along other dimensions; or
Even more indirectly, by addressing the purely correlational question of how well the size of a person or group’s moral circles along one dimension predicts the size of their moral circles along another dimension
Ideally, this research would focus on:
The types of interventions that EAs (or related groups) are most likely to actually consider supporting
The types of “audiences” these interventions are most likely to target (e.g., the general public, AI researchers)
The types of beings those interventions are most likely to directly focus on (e.g., farmed animals)
The types of beings it might be ultimately most important to expand moral circles to include (e.g., wild animals, artificial sentience)
One could also investigate how results differ depending on differences in intervention types, types of audiences, and types of beings. This could inform decisions about things like:
Whether to prioritise clean meat research or advocacy against speciesism
Whether to target thought leaders, tech researchers, or the general public
Whether to focus on expanding moral circles to include farm animals, insects, wild animals, “all sentient beings”, or artificial sentient beings
Note that:
I won’t be pursuing those questions myself, as I’m busy with other projects.
It’s possible that some of the work I propose here is already being done.
Sketches of more specific possible research projects
1. Reviews of literature relevant to the above questions
The relevant literature might be mostly from psychology, history, sociology, and effective altruism.
For example, I know of at least one paper relevant to the extent to which inclusion of some entities in one’s moral circles predicts inclusion of other entities. I suspect there are also others. And the research on secondary transfer effects seems relevant too (my thanks to Jamie Harris for drawing my attention to that).
For another example, I suspect some writings on the history of MCE would contain clues as to how often each of the following things occur:
Expansion along one dimension leads to expansion along another
Expansion along many dimensions happens near-simultaneously due to some other underlying cause (e.g., economic growth)
Expansion occurs along one or more dimensions without occurring along (important) other dimensions
2. Expert elicitation focused on the above questions
This elicitation could be done via surveys and/or via interviews.
The most relevant experts might be psychologists, sociologists, and historians who’ve published relevant research. Other types of people who might be relevant include EAs, animal advocates, futurists, and philosophers who’ve done relevant work.
3. Surveys focused on the above questions
These surveys would likely consist mostly of things like rating scale questions, though with at least some boxes for open-ended responses.
3a. Surveys simply focused on what types of entities people currently include in their moral circles (or related matters, like what entities they empathise with or eat).
Such surveys could provide evidence about Premise 4 because, if a person’s inclusion of one type of entity predicts their inclusion of other types of entities, this would push in favour of the hypothesis that moral circles are “effectively unidimensional”.
That said, that wouldn’t strongly indicate that expansion along one dimension will spill over into expansion along other dimensions. This is because the correlations could reflect how people’s moral circles started out (e.g. due to a genetic predisposition towards generalised empathy), rather than how they expanded.
(I think Lucius Caviola’s thesis “How We Value Animals: The Psychology of Speciesism” would be relevant here, though I haven’t actually read it myself.)
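To make the common-cause worry from the previous point concrete, here’s a minimal simulation sketch. All variable names, effect sizes, and the “generalised empathy” latent factor are made up for illustration; the point is just that sizeable correlations between moral-circle inclusion ratings can arise with no spillover at all.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000  # hypothetical number of survey respondents

# Simulate a latent "generalised empathy" trait that shapes all inclusion
# ratings, plus independent noise per entity type. By construction there is
# NO spillover here -- only a common cause.
empathy = rng.normal(size=n)
farmed = empathy + rng.normal(scale=0.8, size=n)
wild = empathy + rng.normal(scale=0.8, size=n)
artificial = 0.5 * empathy + rng.normal(scale=1.0, size=n)

# Pairwise correlations between inclusion ratings
r_fw = np.corrcoef(farmed, wild)[0, 1]
r_fa = np.corrcoef(farmed, artificial)[0, 1]
print(f"farmed-wild r = {r_fw:.2f}, farmed-artificial r = {r_fa:.2f}")
```

Both correlations come out clearly positive, even though expanding one “dimension” would do nothing to the others in this toy model. That’s why a purely correlational survey (3a) can only weakly support Premise 4.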
3b. Surveys asking people to recall what entities they included in their moral circles (or related matters) at various times.
This could provide slightly better evidence about how expansion along one dimension may or may not lead to expansion along other dimensions. But I don’t think I’d want to put much weight on self-reported distant memories.
3c. Longitudinal surveys on what entities people include in their moral circles (or related matters) at different times.
4. Experiments focused on the above questions
4a. Between-subjects experiments in which some participants are shown arguments, videos, or information which is intended to expand their moral circles to include a particular type of entity, and all participants are asked about which entities they include in their moral circles. The entities participants would be asked about would include ones not focused on by the arguments, videos, or information.
4b. Within-subjects experiments similar to the above, but with the intervention delivered to all participants, and participants being asked about their moral circles both beforehand and afterwards.
(These experiments are relevant inasmuch as Premise 4 focuses on expanding individual people’s moral circles to include farmed animals, rather than on ending factory farming as a whole. But some people might think the “active ingredient” in improving what values ultimately get “locked in” is ensuring factory farming as a whole is ended by the time a value lock-in occurs. It would of course be very hard to run an experiment to relatively directly test that hypothesis.)
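The key analysis for a between-subjects design like 4a is simple: compare the two groups’ ratings of a non-target entity. Here’s a minimal sketch using simulated data and a permutation test; the sample size, rating scale, and the assumed +0.3 “spillover” effect are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200  # hypothetical participants per condition

# Hypothetical post-intervention ratings (1-7 scale) of moral concern for a
# NON-target entity (e.g., artificial sentience), after a farmed-animal
# intervention. Simulated under an assumed small spillover effect.
control = np.clip(rng.normal(3.5, 1.2, size=n), 1, 7)
treatment = np.clip(rng.normal(3.8, 1.2, size=n), 1, 7)  # assumed +0.3 spillover

# Permutation test on the difference in means
observed = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])
perm_diffs = np.empty(5_000)
for i in range(5_000):
    rng.shuffle(pooled)
    perm_diffs[i] = pooled[:n].mean() - pooled[n:].mean()
p_value = np.mean(np.abs(perm_diffs) >= abs(observed))
print(f"spillover estimate = {observed:.2f}, p = {p_value:.3f}")
```

Because assignment is randomised, a difference on the non-target ratings would (unlike the survey correlations above) license a causal interpretation, which is the main attraction of project 4a.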
5. Historical research focused on the above questions
Here I mean historical research other than just literature reviews, since I already discussed literature reviews earlier.
5a. Case studies.
For example, one could investigate the factors that seem to have led to a particular instance of MCE, and how many dimensions moral circles appear to expand along during that instance.
5b. Historical research using a more quantitative, macro approach, examining broader trends.
Some possible arguments against doing those research projects
Perhaps other people already have done or are doing similar research
Perhaps we can already confidently dismiss the above longtermist argument for farmed animal welfare work, without needing to do this research
Perhaps this research would give “pessimistic” results even if that argument was true, because the relevant effects on ultimate moral circles would only occur at a point such as the end of factory farming
Perhaps it’d be better to do some other research project focused on Premise 4 or one of the other premises
It would likely be hard to determine which of the correlations these research projects identify are causal
Perhaps people’s current, self-reported attitudes on (e.g.) artificial sentient beings would bear little resemblance to their later, revealed attitudes
Possible next steps
If someone was interested in doing the sort of research I’ve proposed here, they might wish to take roughly the following steps, in roughly this order:
Comment on this post
Contact the Sentience Institute and/or me to discuss ideas
Do research along the lines of the first two project ideas listed above (partly to help orient themselves to the general area)
Do research along the lines of the third project idea
Maybe do research along the lines of the fourth and fifth project ideas, depending on whether that still seems worthwhile at that stage
See also Notes on EA-related research, writing, testing fit, learning, and the Forum.
Footnotes
[1] To be clear, I’m not saying I’m much more confident that the other premises are true than that Premise 4 is true, nor that research on the other premises wouldn’t be worthwhile. It’s just that I didn’t immediately see tractable ways to investigate those other premises. But see here for some other points about why moral circle expansion may be overrated and how one might investigate that matter.
[2] See also the post Beware surprising and suspicious convergence.
Thanks for this post Michael, I think I agree with everything here! Though if anyone thinks we can “confidently dismiss the above longtermist argument for farmed animal welfare work, without needing to do this research” I’d be interested to hear why.
I just wanted to note that Sentience Institute is pursuing some of this sort of research, but (1) we definitely won’t be able to pursue all of these things any time soon, (2) not that much of our work focuses specifically on these cause prioritisation questions—we often focus on working out how to make concrete progress on the problems, assuming you agree that MCE is important. That said, I think a lot of research can achieve both goals. E.g. my colleague, Ali, is finishing up a piece of research that fits squarely in “4a. Between-subjects experiments… focused on the above questions” currently titled “The impact of perspective taking on attitudes and prosocial behaviours towards non-human outgroups.” And the more explicit cause prioritisation research would still fit neatly within our interests. SI is primarily funding constrained, so if any funders reading this are especially interested in this sort of research, they should feel free to reach out to us.
Thanks for this note! Agreed. My email is jamie@sentienceinstitute.org if anyone does want to discuss these ideas or send me draft writeups for review.
It’s good to hear that SI are already doing some of this research!
I also appreciate you clearly highlighting that there’s still room for others to contribute, and providing your email so people can get in touch.
I don’t personally think we can already confidently dismiss that longtermist argument for farmed animal welfare work.
(But that claim is vague. Here’s an attempt at operationalising it: I am not currently 95%+ confident that, after 10 years of relevant cause prioritisation research, we’d think farmed animal welfare work should get less than 1% of the total longtermist portfolio of resources.)
But I think I’d see it as reasonable if someone else did feel more confident that we can dismiss that argument. Essentially, there are just so many things longtermists could prioritise, and I think it’d be reasonable to think that:
the existing arguments for focusing on farmed animals are very weak
the arguments for focusing on other things are much stronger
those things are sufficiently strongly true that we may as well focus on other cause prioritisation questions or just more object-level work on current longtermist priorities, rather than on further investigating whether farmed animal work should be a top priority
To expand on / rephrase that a bit, I think it would be reasonable for someone to make roughly the following claims:
There are a staggeringly large number of things that theoretically could absorb a substantial fraction of longtermist resources. So our prior credence that something chosen entirely at random should absorb a substantial fraction of longtermist resources should be very low.
It’s true that farmed animal welfare work wasn’t just randomly chosen, but rather highlighted as a potential top priority by a substantial portion of effective altruists. And the basic importance, tractability, and neglectedness arguments seem reasonable. But that was all basically from a near-termist perspective, so it’s still relatively close to “randomly chosen” if we now adopt a longtermist perspective, unless we have some specific argument why it would be a top priority from a longtermist perspective.
That argument could perhaps just focus on the premise that things that are good for the near-term future are often good for the long-term future, perhaps combined with the idea that predicting anything else about what would be good for the long-term future would be extremely hard. This could suggest that it wouldn’t be at all suspicious for neartermist priorities to also be longtermist priorities. But I haven’t yet seen any proper attempt to outline and defend that premise.
We could instead use something like the four-premise argument given in this post. But each premise hasn’t received a very rigorous defence as of yet, and it seems that various counterpoints could be raised against each.
Also, it seems that that basic argument might offer similarly much support to the idea that we should prioritise work on wild animal welfare, artificial sentience, explicit moral advocacy that isn’t primarily focused on farmed animals, or something else like that.
Meanwhile, there are various other potential longtermist priorities that have received fairly rigorous defences and that seem to face less compelling counterpoints.
I think I basically believe those claims. But, as noted, I still don’t feel we should confidently dismiss the idea that work on farmed animals should get a nontrivial portion of longtermist resources. This is partly due to the plausibility of the argument given in this post, and partly simply because I think we’re dealing with extremely complicated questions and haven’t been thinking about them for very long, so we should remain quite uncertain and open to a range of ideas.
Thanks for writing this Michael, I would love to see more research in this area.
This is definitely an important point.
This is very speculative, but part of me wonders if the best thing to advocate for is (impartial) utilitarianism. This would, if done successfully, expand moral circles across all relevant boundaries including farm animals, wild animals and artificial sentience, and future beings. Advocacy for utilitarianism would naturally include “examples”, such as ending factory farming, so it wouldn’t have to be entirely removed from talk of farmed animals. I’m quite uncertain whether such advocacy would be effective (or even be good in expectation), but it is perhaps an option to consider.
(Of course this all assumes that utilitarianism is true/the best moral theory we currently have).
Another way to approach this is to ensure that people who are already interested in learning about utilitarianism are able to find high-quality resources that explicitly cover topics like the idea of the expanding moral circle, sentiocentrism/pathocentrism, and the implications for considering the welfare of geographically distant people, other species, and future generations.
Improving educational opportunities of this kind was one motivation for writing this section on utilitarianism.net: Chapter 3: Utilitarianism and Practical Ethics: The Expanding Moral Circle.
When I read your comment, I thought “I think you’ve correctly highlighted one reason we might want to focus on advocating for impartial utilitarianism or for moral concern for ‘all sentient beings’, but I think there are many other considerations that are relevant and that could easily tip the balance in favour of some other framing. E.g., it’s also good for a framing to be easy to understand and get behind, and relatively unlikely to generate controversy.”
So then I decided to try to come up with considerations/questions relevant to which framing for MCE advocacy would be best (especially from a longtermist perspective). Here’s my initial list:
Which existing or potential future beings actually are moral patients?
And how much moral weight and capacity for welfare does/will each have?
And how numerous is/will each type of being be?
Which framing will spread the most? I.e., which framing is most memetically fit (most memorable, most likely to be shared, etc.)?
Which framing will be most convincing?
Which framing will generate the least opposition, the lowest chance of PR issues, or similar?
E.g., perhaps two framings are both likely to be quite convincing for ~10% of people who come across them, while causing very little change in the beliefs or behaviours of most people who come across them, but one framing is also likely to cause ~10% of people to think the person using that framing is stupid, sanctimonious, and/or immoral. That would of course push against using the latter, more controversial framing.
Which framing will be most likely to change actual behaviours, and especially important ones?
Which framing is most likely to be understood and transmitted correctly?
See also The fidelity model of spreading ideas
And to what extent would each framing “fail gracefully” when understood/transmitted incorrectly (i.e., how much would the likely misinterpretations worsen people’s beliefs or behaviours)?
Which framing would be easiest to adjust given future changes in our understanding about moral patienthood, moral weight, expected numbers of various future beings, etc.?
This seems like an argument in favour of “all sentient beings” over something like “people in all places and all times” or “all animals”, at least if we’re more confident that sentience is necessary and sufficient for moral patienthood than that being a person or being an animal is.
I think one can think about this consideration in two ways:
Correcting course: We’d ideally like a framing that doesn’t overly strongly fix in place some specific views we might later realise were wrong.
Maintaining momentum: We’d ideally like a framing that allows us to adjust it later in a way that can preserve and redirect the supportive attitudes or communities that have by then built up around that framing.
E.g., perhaps we could have our primary framing be “all animals”, but ensure we always prominently explain that we’re using this framing because we currently expect all animals are sentient and nothing else is, that we might be wrong about that, and that really we think sentience is key. Then if we later decide to exclude some animals or include some non-animals, this could seem like a refinement of the basic ideas rather than an unappealing lurch in a new direction.
I’m sure other considerations/questions could be generated, and that these ones could be productively rephrased or reorganised. And maybe there’s an existing list that I haven’t seen that covers this territory better than this one does.
I also think this is plausible, though I should also note that I don’t currently have a strong view on:
whether that’s a better bet than other options for moral advocacy
how valuable the best of those actions are relative to other longtermist actions
Readers interested in this topic might want to check out posts tagged moral advocacy / values spreading, and/or the sources collected here on the topic of “How valuable are various types of moral advocacy? What are the best actions for that?” (this collection is associated with my post on Crucial questions for longtermists).
Great post, thanks for writing this!
I would be most excited about projects 3c and 4a, since I think we could draw the strongest conclusions from them by directly asking about artificial sentience (including not-so-intelligent artificial sentience, more like nonhuman animals) and more neglected animals (wild animals, invertebrates), and could possibly infer causation.
For 3c specifically, I’d want to see how people’s attitudes towards artificial sentience and neglected animals change in response to major animal welfare events, e.g. animal advocacy/welfare media attention in general, or specifically ballot initiatives, new legislation, corporate commitments, undercover investigations, etc. I think we’d need to collect a lot of data to do this, though.
This also might be relevant, for moral circle expansion towards farmed animals from humans, although I’m not sure we can assume causality rather than just a common cause (e.g. liberal/progressive values): https://www.washingtonpost.com/politics/2019/07/26/who-supports-animal-rights-heres-what-we-found/
Some extra thoughts: Part of why various longtermist priorities are neglected by society is arguably that people’s moral circles don’t fully/adequately include (far) future generations of humans. (See also Moral circles: Degrees, dimensions, visuals.)
I think this has two interesting implications.
Firstly, this implies that another way in which these research projects might inform longtermists is by providing evidence about the extent to which various actions might have the positive side effect of expanding moral circles to include (far) future humans. E.g., if these research projects suggest that work on farmed animals is likely to also strongly expand moral circles to artificial sentience and wild animals, this is weak evidence that such “spillovers” are common and thus that various actions aimed at causing MCE to humans could also have substantial benefits in terms of increasing the resources allocated to extinction risk reduction or similar things. (See also Extinction risk reduction and moral circle expansion: Speculating suspicious convergence.)
Secondly, this implies that it might be valuable to pursue research projects similar to those proposed in this post, but with a specific focus on expanding moral circles to (far) future humans. For example, one could conduct expert interviews that include questions about how advocacy related to farm animals, wild animals, or “all sentient beings” might or might not affect attitudes towards (far) future humans.
Thanks for writing this, even accounting for suspicious convergence (which you were right to flag), it just seems really plausible that improving animal welfare now could turn out to be important from a longtermist perspective, and I’d be really excited to hear about more research in this field happening.
Is this just something you already believed, or are you indicating that this post updated you a bit more towards believing this?
I initially assumed you meant the latter, which I found slightly surprising, though on reflection it seems reasonable.
Why I found it surprising: When I wrote the original version of this post in 2020, I was actually coming at it mainly from an angle of “Here’s an assumption which seems necessary for the standard longtermist case for working on farmed animals, but which is usually not highlighted or argued for explicitly, and which seems like it could easily be wrong.” So I guess I would’ve assumed it’d mostly cause people to update slightly away from believing that longtermist case for working on farmed animals. (But only slightly; this post mainly raises questions rather than strong critiques.)
But I guess it really depends on the reader: While some people are familiar with and at least somewhat bought into that longtermist case for working on farmed animals but have probably paid insufficient attention to the fact that Premise 4 might be wrong, some other people haven’t really encountered a clear description of that longtermist case, and some people mostly discuss longtermism as if it is necessarily about humans. So for some people, I think it’d make sense for this post to update them towards that longtermist case for working on farmed animals.
I already believed it and had actually been talking to someone about it recently, so I was surprised and pleased to come across the post, but couldn’t find a phrasing that didn’t just sound like I was saying “oh yeah, thanks for writing up my idea”. Sorry for the confusion!