If there is a human being that currently scores 10 out of 100 and a mouse that currently scores 9 out of 10, prioritarianism and egalitarianism imply, all else equal, that we ought to increase the welfare of the mouse before increasing the welfare of the human.
To clarify, this is if we’re increasing their welfare by the same amount, right? Prioritarianism and egalitarianism wouldn’t imply that it’s better for the mouse to be moved to 10 than for the human to be moved to 100.
Right. The claim is that the prioritarian and the egalitarian would prefer to move the mouse from 9/10 to 10/10 before moving the human from 10/100 to 11/100. Kagan argues this is the wrong result, but because he doesn’t want to throw out distributive principles altogether, he thinks the best move is to appeal to differences in moral status between the mouse and the human.

Tatjana Višak (2017: 15.5.1 and 15.5.2) argues that any welfare theory that predicts large differences in realized welfare between humans and nonhuman animals must be false because, given a commitment to prioritarianism[52] or egalitarianism,[53] such a theory of welfare would imply that we ought to direct resources to animals that are almost as well-off as they possibly could be.
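To make the arithmetic concrete, here’s a minimal sketch, assuming for illustration a square-root priority weighting (the particular concave function is my choice, not Višak’s or Kagan’s). A prioritarian ranks outcomes by total priority-weighted welfare, $W = \sum_i f(w_i)$, with $f$ increasing and strictly concave. Taking $f(w) = \sqrt{w}$:

$$\underbrace{\sqrt{10} - \sqrt{9}}_{\text{mouse, } 9 \,\to\, 10} \approx 0.162 \;>\; 0.154 \approx \underbrace{\sqrt{11} - \sqrt{10}}_{\text{human, } 10 \,\to\, 11}$$

Only absolute welfare levels enter $f$; the differing ceilings (10 versus 100) play no role. That’s why the mouse, already at 90% of its maximum, still gets priority for the same one-unit gain.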
It seems like the opposite could be true in theory under an antifrustrationist or negative account of welfare, where the maximum is 0, if an individual human’s welfare is harder to maximize, say because of our more varied and numerous preferences and stronger interests (e.g. future-oriented preferences). In practice, though, the average life for nonhuman animals of many species, wild or farmed, does seem to me to involve more suffering (per second).
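To illustrate that possible reversal with hypothetical numbers (mine, not the commenter’s): suppose the scale tops out at 0, the human sits at $-50$ (many frustrated preferences), and the mouse at $-1$. The human is now the worse off in absolute terms, so prioritarians and egalitarians favor the human. With an increasing, strictly concave weighting such as $f(w) = -(-w)^{3/2}$ for $w \le 0$:

$$\underbrace{f(-49) - f(-50)}_{\text{human, } -50 \,\to\, -49} \approx 10.6 \;\gg\; 1 = \underbrace{f(0) - f(-1)}_{\text{mouse, } -1 \,\to\, 0}$$

so the same one-unit improvement counts far more when it goes to the human.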