Thanks for writing this Michael, I would love to see more research in this area.
Thus, it seems plausible that expanding a person’s moral circle to include farm animals doesn’t bring the “boundary” of that person’s moral circles any “closer” to including whatever class of beings we’re ultimately concerned about (e.g., wild animals or artificial sentient beings). Furthermore, even if expanding a person’s moral circle to include farm animals does achieve that outcome, it seems plausible that the outcome would be better achieved by expanding moral circles along other dimensions (e.g., by doing concrete wild animal welfare work, advocating for caring about all sentient beings, or advocating for caring about future artificial sentient beings).[2]
This is definitely an important point.
This is very speculative, but part of me wonders if the best thing to advocate for is (impartial) utilitarianism. This would, if done successfully, expand moral circles across all relevant boundaries, including farm animals, wild animals, artificial sentience, and future beings. Advocacy for utilitarianism would naturally include “examples”, such as ending factory farming, so it wouldn’t have to be entirely removed from talk of farmed animals. I’m quite uncertain whether such advocacy would be effective (or even good in expectation), but it is perhaps an option to consider.
(Of course, this all assumes that utilitarianism is true, or at least the best moral theory we currently have.)
Another way to approach this is to ensure that people who are already interested in learning about utilitarianism are able to find high-quality resources that explicitly cover topics like the idea of the expanding moral circle, sentiocentrism/pathocentrism, and the implications of considering the welfare of geographically distant people, other species, and future generations.
Improving educational opportunities of this kind was one motivation for writing this section on utilitarianism.net: Chapter 3: Utilitarianism and Practical Ethics: The Expanding Moral Circle.
When I read your comment, I thought “I think you’ve correctly highlighted one reason we might want to focus on advocating for impartial utilitarianism or for moral concern for ‘all sentient beings’, but I think there are many other considerations that are relevant and that could easily tip the balance in favour of some other framing. E.g., it’s also good for a framing to be easy to understand and get behind, and relatively unlikely to generate controversy.”
So then I decided to try to come up with considerations/questions relevant to which framing for MCE advocacy would be best (especially from a longtermist perspective). Here’s my initial list:
Which existing or potential future beings actually are moral patients?
And how much moral weight and capacity for welfare does/will each have?
And how numerous is/will each type of being be?
Which framing will spread the most? I.e., which framing is most memetically fit (most memorable, most likely to be shared, etc.)?
Which framing will be most convincing?
Which framing will generate the least opposition, the lowest chance of PR issues, or similar?
E.g., perhaps two framings are both likely to be quite convincing for ~10% of people who come across them, while causing very little change in the beliefs or behaviours of most people who come across them, but one framing is also likely to cause ~10% of people to think the person using that framing is stupid, sanctimonious, and/or immoral. That would of course push against using the latter, more controversial framing.
Which framing will be most likely to change actual behaviours, and especially important ones?
Which framing is most likely to be understood and transmitted correctly?
See also The fidelity model of spreading ideas.
And to what extent would each framing “fail gracefully” when understood/transmitted incorrectly (i.e., how much would the likely misinterpretations worsen people’s beliefs or behaviours)?
Which framing would be easiest to adjust given future changes in our understanding about moral patienthood, moral weight, expected numbers of various future beings, etc.?
This seems like an argument in favour of “all sentient beings” over something like “people in all places and all times” or “all animals”, at least if we’re more confident that sentience is necessary and sufficient for moral patienthood than that being a person or being an animal is.
I think one can think about this consideration in two ways:
Correcting course: We’d ideally like a framing that doesn’t overly strongly fix in place some specific views we might later realise were wrong.
Maintaining momentum: We’d ideally like a framing that allows us to adjust it later in a way that can preserve and redirect the supportive attitudes or communities that have by then built up around that framing.
E.g., perhaps we could have our primary framing be “all animals”, but ensure we always prominently explain that we’re using this framing because we currently expect all animals are sentient and nothing else is, that we might be wrong about that, and that really we think sentience is key. Then if we later decide to exclude some animals or include some non-animals, this could seem like a refinement of the basic ideas rather than an unappealing lurch in a new direction.
I’m sure other considerations/questions could be generated, and that these ones could be productively rephrased or reorganised. And maybe there’s an existing list that I haven’t seen that covers this territory better than this one does.
part of me wonders if the best thing to advocate for is (impartial) utilitarianism
I also think this is plausible, though I should also note that I don’t currently have a strong view on:
whether that’s a better bet than other options for moral advocacy
how valuable the best of those actions are relative to other longtermist actions
Readers interested in this topic might want to check out posts tagged moral advocacy / values spreading, and/or the sources collected here on the topic of “How valuable are various types of moral advocacy? What are the best actions for that?” (this collection is associated with my post on Crucial questions for longtermists).