When I read your comment, I thought “I think you’ve correctly highlighted one reason we might want to focus on advocating for impartial utilitarianism or for moral concern for ‘all sentient beings’, but I think there are many other considerations that are relevant and that could easily tip the balance in favour of some other framing. E.g., it’s also good for a framing to be easy to understand and get behind, and relatively unlikely to generate controversy.”
So then I decided to try to come up with considerations/questions relevant to which framing for MCE advocacy would be best (especially from a longtermist perspective). Here’s my initial list:
Which existing or potential future beings actually are moral patients?
And how much moral weight and capacity for welfare does or will each of those beings have?
And how numerous is or will each type of being be?
Which framing will spread the most? I.e., which framing is most memetically fit (most memorable, most likely to be shared, etc.)?
Which framing will be most convincing?
Which framing will generate the least opposition, the lowest chance of PR issues, or similar?
E.g., perhaps two framings are both likely to be quite convincing for ~10% of people who come across them, while causing very little change in the beliefs or behaviours of most people who come across them, but one framing is also likely to cause ~10% of people to think the person using that framing is stupid, sanctimonious, and/or immoral. That would of course push against using the latter, more controversial framing.
Which framing will be most likely to change actual behaviours, and especially important ones?
Which framing is most likely to be understood and transmitted correctly?
See also The fidelity model of spreading ideas.
And to what extent would each framing “fail gracefully” when understood/transmitted incorrectly (i.e., how much would the likely misinterpretations worsen people’s beliefs or behaviours)?
Which framing would be easiest to adjust given future changes in our understanding about moral patienthood, moral weight, expected numbers of various future beings, etc.?
This seems like an argument in favour of “all sentient beings” over something like “people in all places and all times” or “all animals”, at least if we’re more confident that sentience is necessary and sufficient for moral patienthood than we are that being a person or being an animal is.
I think one can think about this consideration in two ways:
Correcting course: We’d ideally like a framing that doesn’t overly strongly fix in place some specific views we might later realise were wrong.
Maintaining momentum: We’d ideally like a framing that allows us to adjust it later in a way that can preserve and redirect the supportive attitudes or communities that have by then built up around that framing.
E.g., perhaps we could have our primary framing be “all animals”, but ensure we always prominently explain that we’re using this framing because we currently expect that all animals, and nothing else, are sentient; that we might be wrong about that; and that really we think sentience is what matters. Then if we later decide to exclude some animals or include some non-animals, this could seem like a refinement of the basic ideas rather than an unappealing lurch in a new direction.
I’m sure other considerations/questions could be generated, and that these ones could be productively rephrased or reorganised. And maybe there’s an existing list that I haven’t seen that covers this territory better than this one does.