tl;dr for this comment:
I appreciate you highlighting that longtermism doesn't necessarily entail ultimately focusing on humans.
But I think it'd be better to broaden your discussion to humans, non-human animals, and other types of non-humans that might be moral patients (e.g., artificial sentiences).
And I think you imply that, when it comes to non-humans, we must necessarily focus on suffering-reduction.
I think it'd be better to broaden your discussion so that it's open to other goals regarding non-humans, such as increasing happiness.
---
"A nitpick I have with Greaves and MacAskill's paper is that they don't mention non-human animals."
This was also a nitpick I had. Here's a relevant part of the notes I wrote on their paper (which I should be posting in full soon):
It seems like the paper implicitly assumes that humans are the only moral patients
I think it makes sense for the paper to focus on humans, since it makes sense for many papers to tackle just one thorny issue at a time
But I think it would've been good for the paper to at least briefly acknowledge that this is just a simplifying assumption
Perhaps just in a footnote
Otherwise the paper is kind of implying that the authors really do take it as a given that humans are the only moral patients (which I think wouldn't actually match the authors' views)
---
"Longtermists shouldn't ignore non-human animals. It is plausible that there are things we can do to address valid concerns about the suffering of non-human animals in the far future. More research into the tractability of certain interventions could have high expected value."
I appreciate you highlighting that longtermism doesn't have to focus on humans. But I personally think your framing is still narrower than it should be: you could be interpreted as implying that longtermists should focus on either humans, or on the suffering of non-human animals, or on some combination of those goals.
But I think it's also quite important to consider other possible moral patients that are neither humans nor animals, such as artificial sentiences. (Some artificial sentiences might arguably be considered by some people to be effectively humans or non-human animals, but this may not be the case, and other artificial sentiences might be more starkly different.)
And it could also be a moral priority to decrease bad things other than suffering among non-humans, such as death or a lack of freedom. (This would of course require that utilitarianism be false, or at least that we be quite uncertain about it.) And it could also be a moral priority to increase good things for non-humans (e.g., allow there to be large numbers of happy non-human beings).
"Tobias Baumann suggests that expanding the moral circle to include non-human animals might be a credible longtermist intervention, as a good long-term future for all sentient beings may be unlikely as long as people think it is right to disregard the interests of animals for frivolous reasons such as the taste of meat. Non-human animals are moral patients that are essentially at our will, and it seems plausible that there are non-extinction attractor states for these animals."
I do think that this is all plausible. But I think people have sometimes jumped on this option too quickly, with too little critical consideration. See also this section of a post and this doc.
Also, I think "Non-human animals *are* moral patients" (emphasis added) is too strong; I'm not sure we should be practically certain that any non-human animals are moral patients, and I definitely don't think we should be practically certain that all are (e.g., insects, crabs).
To be clear, I'm vegan, and broadly supportive of people focusing on animal welfare, and I think due to moral uncertainty / expected value society should pay far more attention to animals than it does. But I still think it's quite unclear which animals are moral patients. And my impression is that people who've looked into this tend to feel roughly similar (see e.g. Muehlhauser's report).
"For example, just as with humans, non-human animal extinction and non-extinction are both attractor states. It is plausible that the extinction of both farmed and wild animals is better than existence, as some have suggested that wild animals tend to experience far more suffering than pleasure and it is clear that factory-farmed animals undergo significant suffering. Therefore causing non-human animal extinction may have high value. Even if some non-human animals do have positive welfare, it may be better to err on the side of caution and cause them to go extinct, making use of any resources or space that is freed up to support beings that have greater capacity for welfare and that are at lower risk of being exploited, e.g. humans (although this may not be desirable under certain population axiologies)."
My impression is that people interested in wild animal welfare early on jumped a bit too quickly to being confident that wild animal lives are net negative, possibly because one of the pioneers of this area (Brian Tomasik) is morally suffering-focused (hence the early arguments tended to focus on suffering).
I don't have a strong view on whether wild animal lives tend to be net negative, but it seems to me that more uncertainty is warranted.
I don't think this undermines the idea that maybe longtermists should focus on non-humans, but it suggests that it may be unwise to place much more emphasis on reducing suffering (and maybe even causing extinction) than on other options (e.g., improving their lives or increasing their population). I think we should currently see both approaches as plausible priorities.
tl;dr: It's plausible to me that the future will involve far more non-biological sentience (e.g., whole brain emulations) than biological sentience, which might make farmed animals redundant and wild animals vastly outnumbered.
You write:
"The second reason is an interesting possibility. It could be the case if perhaps there aren't a vast number of expected non-human animals in the future. It does indeed seem possible that farmed animals may cease to exist in the future on account of being made redundant due to cultivated meat, although I certainly wouldn't be sure of this given some of the technical problems with scaling up cultivated meat to become cost-competitive with cheap animal meat. Wild animals seem highly likely to continue to exist for a long time, and currently vastly outnumber humans. Therefore it seems that we should consider non-human animals, and perhaps particularly wild animals, when aiming to ensure that the long-term future goes well."
I'm not sure why you think it's highly likely that wild animals will continue to exist for a long time, and in particular in large numbers relative to other types of beings (which you seem to imply, though you don't state it outright). It seems plausible to me that the future will involve something like massive expansion into space by (mostly) humans, whole-brain emulations, or artificial sentiences, without spreading wild animals to these places. (We might spread simulated wild animals without spreading biological ones, but we also might not.)
Relatedly, I think another reason farmed animals might be made redundant is that humanity may simply move from biological to digital form, such that there is no need to eat actual food of any kind. (Of course, it seems hard to say whether this'll happen, but over a long time-scale I wouldn't say it's highly unlikely.)
See also posts tagged Non-Humans and the Long-Term Future
See also the EAG talk Does suffering dominate enjoyment in the animal kingdom? | Zach Groff.
For arguments for and against these sorts of points I'm making, see Should Longtermists Mostly Think About Animals? and the comments there.