In the interests of furthering the debate, I’ll quickly offer several additional arguments that I think can favour global health over animal welfare.
Simulation Argument
The Simulation Argument says that we are very likely living in an ancestor simulation rather than in base reality. Since it is presumably human ancestors that the simulators are interested in fully simulating, non-human animals are unlikely to be simulated at the same level of granularity and may not be sentient.
Pinpricks vs. Torture
This is a trolley-problem-style scenario. Eliezer Yudkowsky discusses it as the choice between a speck of dust in the eyes of 3^^^3 people and one person being tortured for 50 years, and an analogous case is made in Ursula K. Le Guin’s famous short story The Ones Who Walk Away From Omelas. The basic idea is to question whether scope sensitivity is justified.
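As an aside on the notation: 3^^^3 is written in Knuth’s up-arrow notation, and unpacking the standard definition gives a sense of the scale involved:

$$3\uparrow\uparrow\uparrow 3 = 3\uparrow\uparrow(3\uparrow\uparrow 3) = 3\uparrow\uparrow 3^{3^{3}} = 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987,$$

i.e. a power tower of 3s roughly 7.6 trillion levels high.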
I’ll note that one way to avoid this conclusion is to adopt Maximin rather than Expected Value as the decision function, as John Rawls suggested in A Theory of Justice.
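To make the contrast concrete, here is a minimal sketch of the two rules applied to a toy version of the dust-specks case. Since the thought experiment involves no uncertainty, I’m reading “Expected Value” here as simple utility aggregation, and all the numbers (the per-person utilities and the stand-in population size) are made-up placeholders, not estimates from the comment above:

```python
# Toy comparison of aggregative utility vs. Rawls-style Maximin.
# All utilities and counts are illustrative placeholders, not real estimates.

NUM_SPECK_VICTIMS = 10**12  # stand-in for 3^^^3, which is far too large to compute with

options = {
    "torture_one_person": {"per_person_utility": -1_000_000, "people": 1},
    "dust_specks": {"per_person_utility": -0.001, "people": NUM_SPECK_VICTIMS},
}

def aggregate_utility(name):
    """Scope-sensitive rule: sum (dis)utility across everyone affected."""
    o = options[name]
    return o["per_person_utility"] * o["people"]

def maximin(name):
    """Rawls-style rule: judge each option by its worst-off individual."""
    return options[name]["per_person_utility"]

for rule_name, rule in [("Aggregation", aggregate_utility), ("Maximin", maximin)]:
    least_bad = max(options, key=rule)  # pick the option with the least-bad score
    print(f"{rule_name} picks: {least_bad}")

# Aggregation picks torture (-1e6 beats the specks' total of -1e9),
# while Maximin picks dust specks (its worst-off person only loses 0.001).
```

The sketch only shows that the two rules can reverse the ranking; whether Maximin is the right rule here is a separate question.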
Incommensurability
In moral philosophy there’s a concept called incommensurability: the idea that some things are simply not comparable. Some might argue that human and animal experiences are incommensurable, that we cannot know what it is like to be a bat, for instance.
Balance of Categorical Responsibilities
Philosophies like Confucianism include notions like filial piety that support a kind of hierarchy of moral circles, such that family strictly dominates the state, and so on. In the extreme, this leads to a kind of ethical egoism that I don’t think any altruist would subscribe to, but which seems a common way of thinking among laypeople, and conservatives in particular. I don’t suggest this option, but I mention it as an extreme case.
Utilitarianism, in contrast, tends toward the opposite extreme of equalizing moral circles to the point of complete impartiality towards every individual: the greatest good for the greatest number. This creates a kind of demandingness that would require us to sacrifice pretty much everything, our lives devoted entirely to something like shrimp welfare.
Rather than taking either extreme, it’s possible to balance things according to the idea that we have separate, categorical responsibilities: to ourselves, to our family, to our nation, to our species, and to everyone else. We can then put resources into each category so that none of our responsibilities is neglected in favour of the others, a kind of meta- or group-level impartiality rather than individual impartiality.
Thanks for the comment!
I’ve always heard “pinpricks vs torture” or the Omelas story interpreted as illustrating the overwhelming badness of extreme suffering, rather than as an argument against scope sensitivity. I’ve heard it cited in favor of animal welfare! As one can see from the Dominion documentary, billions of animals live lives of extreme suffering. Omelas could be read as arguing that this suffering is even more important than is otherwise assumed.
I think it’s hard to say what the simulation argument implies for this debate one way or the other, since there are many more (super speculative) considerations:
If consciousness is an illusion or a byproduct of certain kinds of computations which would arise in any substrate, then we should expect animals to be conscious even in the simulation.
I’ve heard some argue that the simulators would be interested in the life trajectories of particular individuals, which could imply that only a few select humans would be conscious, and nobody else. (In history, we tell the stories of world-changing individuals, neglecting those of every other individual. In video games, often only the player and maybe a select few NPCs are given rich behavior.)
The simulators might be interested in seeing what the pre-AGI world looked like, and might terminate the simulation once we get AGI. In that case, we’d want to go all-in on suffering reduction, which would probably mean prioritizing animals.
I agree with you that many claim the moral value of animal experiences is incommensurate with that of human experiences, and that categorical responsibilities would generally also favor humans.