I also disagree with those comments, but can you provide more of an argument for your principle? If I understand correctly, you are suggesting the principle that X can be lexicographically[1] preferable to Y if and only if Y has zero value. But, conditional on saying X is lexicographically preferable to Y, isn’t it better for the interests of Y to say that Y nevertheless has positive value? I mean, I don’t like it when people say things like “no amount of animal suffering, however enormous, outweighs any amount of human suffering, however tiny”. But I think it is even worse to say that animal suffering doesn’t matter at all, and that there is no reason to alleviate it even if it could be alleviated at no cost to human welfare.
Maybe your reasoning is more like this: in practice, everything trades off against everything else. So, in practice, there is just no difference between saying “X is lexicographically preferable to Y but Y has positive value”, and “Y has no value”?
From SEP: “A lexicographic preference relation gives absolute priority to one good over another. In the case of two-goods bundles, A ≻ B if a₁ > b₁, or a₁ = b₁ and a₂ > b₂. Good 1 then cannot be traded off by any amount of good 2.”
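For concreteness, here is a minimal sketch of that comparison rule (in Python; the function name and bundle representation are my own choices, not SEP’s):

```python
def lex_prefers(a, b):
    """Lexicographic preference over two-good bundles a = (a1, a2) and
    b = (b1, b2): good 1 has absolute priority, and good 2 only breaks
    ties in good 1."""
    a1, a2 = a
    b1, b2 = b
    return a1 > b1 or (a1 == b1 and a2 > b2)

# No amount of good 2 compensates for any deficit in good 1:
assert lex_prefers((1, 0), (0, 10**9))
# But good 2 still matters once good 1 is held fixed:
assert lex_prefers((1, 1), (1, 0))
```

The second assert is exactly the point above: giving good 1 absolute priority does not mean good 2 has zero value.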
I think that in practice most people’s ethical frameworks include lexicographic preferences, regardless of whether they are happy making other decisions using a cardinal utility framework.
I suspect most animal welfare enthusiasts presented with the possibility of organising a bullfight wouldn’t respond with “well, how big is the audience?”. I don’t think their reluctance to decide whether bullfighting is ethical based on estimated utility tradeoffs reflects either a rejection of the value of human welfare or a speciesist bias against humans.
I like your framing though. Taken to its logical conclusion, you’re implying:
(1) Some people have strong lexicographic preferences for X over improvements in Y
(2) Insisting that the only valid ethical decision-making framework is a mathematical total-utilitarian one, in which everything of value must be assigned cardinal weights, implies that to maintain this preference they must reject the possibility of Y having value
(3) Acting within this framework implies they should also be indifferent to Y in all other circumstances, including when there is no tradeoff at all (see the sketch after this list)
(4) Demanding that people with lexicographic preferences for X “shut up and multiply” is likely to lead to lower total utility for Y. More generally, a world in which everyone acts as if anything they assign value to may be multiplied and traded off against what they value most sounds like a world in which most people will opt to care about as few things as possible.
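To illustrate (2) and (3) concretely, here is a hedged sketch, assuming a simple weighted sum stands in for the cardinal framework:

```python
def cardinal_utility(bundle, weights):
    """Total utility as a weighted sum of the goods in a bundle."""
    return sum(w * g for w, g in zip(weights, bundle))

# To make X always beat Y in a weighted sum, Y's weight must be 0:
# any positive weight on Y, however small, breaks absolute priority.
weights = (1, 0)

# But a zero weight also makes the framework indifferent to a free
# improvement in Y, even when it costs nothing at all in X:
assert cardinal_utility((1, 5), weights) == cardinal_utility((1, 0), weights)
# The lexicographic ordering sketched earlier strictly prefers (1, 5) here.
```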