I think in practice most people's ethical frameworks include lexicographic preferences, regardless of whether they are happy to make other decisions within a cardinal utility framework.
I suspect most animal welfare enthusiasts, presented with the possibility of organising a bullfight, wouldn't respond with “well, how big is the audience?”. I don't think their reluctance to decide whether bullfighting is ethical based on estimated utility tradeoffs reflects either a rejection of the possibility of human welfare or a speciesist bias against humans.
I like your framing though. Taken to its logical conclusion, you’re implying:
(1) Some people have strong lexicographic preferences for X over improved Y
(2) Insisting that the only valid ethical decision-making framework is a mathematical total-utilitarian one, in which everything of value must be assigned cardinal weights, implies that to maintain this preference they must deny that Y has any value at all
(3) Acting within this framework implies they should then also be indifferent to Y in all other circumstances, including when there is no tradeoff with X (see the sketch after this list)
(4) Demanding that people with lexicographic preferences for X shut up and multiply is therefore likely to lead to lower total utility for Y. More generally, a world in which everyone acts as if anything they assign value to may be multiplied and traded off against everything else they value sounds like a world in which most people will opt to care about as few things as possible.
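To make (3) concrete, here's a minimal sketch (the names and numbers are mine, purely illustrative): a lexicographic comparator still respects Y whenever X ties, whereas reproducing the same “no bullfight at any audience size” verdict with fixed cardinal weights forces the weight on Y to zero, and with it indifference to Y even when nothing trades off against it.

```python
from typing import NamedTuple

class Outcome(NamedTuple):
    x: float  # the lexically prior value (e.g. avoiding animal suffering)
    y: float  # the secondary value (e.g. human enjoyment)

def lexicographic_prefers(a: Outcome, b: Outcome) -> bool:
    """Prefer a over b on X first; Y only breaks ties in X."""
    if a.x != b.x:
        return a.x > b.x
    return a.y > b.y

def cardinal_prefers(a: Outcome, b: Outcome, w_x: float, w_y: float) -> bool:
    """Prefer a over b by a weighted sum of cardinal utilities."""
    return w_x * a.x + w_y * a.y > w_x * b.x + w_y * b.y

no_bullfight = Outcome(x=1.0, y=0.0)
big_audience = Outcome(x=0.0, y=1000.0)

# The lexicographic agent rejects the bullfight regardless of audience size...
assert lexicographic_prefers(no_bullfight, big_audience)

# ...but still cares about Y when X is not at stake (no tradeoff):
quiet_evening = Outcome(x=1.0, y=0.0)
fun_evening = Outcome(x=1.0, y=10.0)
assert lexicographic_prefers(fun_evening, quiet_evening)

# With fixed finite weights, rejecting the bullfight for *every* audience
# size requires w_y = 0 -- which then forces indifference to Y even when
# there is no tradeoff, as in point (3):
assert not cardinal_prefers(fun_evening, quiet_evening, w_x=1.0, w_y=0.0)
```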