What do you think about using functional definitions of intensity like the Welfare Footprint Project’s, eliciting human cardinal ratings of intensity under those definitions, and then just extending those same ratings to other animals, as in Reminding myself just how awful pain can get (plus, an experiment on myself) by Ren Springlea?
I think this could fit well with illusionism about consciousness and moral subjectivism or error theory, which are my best guesses for theories of consciousness and metaethics, respectively. The idea could be that there is no objective cardinal intensity for pain in humans or other animals; cardinal intensities are illusory beliefs we impose. We can also then impose them on behalf of other animals, basically as we believe we would if we had introspective access to their experiences, or if they had capacities for reasoning and report like ours. This is in line with Frankish’s normative interpretation of illusionism, on which introspection and an actual illusion aren’t necessary for consciousness; what matters is what a hypothetical sophisticated introspective system, connected to the system of interest in the right way, would believe (e.g. here and here).
To be clear, though, contrary to Frankish, and more in line with Humphrey and (I think?) Muehlhauser, I actually do think mammals and birds, at least, have consciousness illusions, including illusions (beliefs) that pain is bad, that some pains are worse (more intense) than others, and so on, and I suspect such illusions are necessary for counting as moral patients at all. I think Frankish’s standards for introspection and belief are too high. However, I’m not sure there’s a cardinal structure there. For an individual to impose cardinal structure subjectively themselves, they may need to be able to understand multiplication and ratios, and be inclined to actually make ratio judgements about pain intensities. Probably only humans would meet this bar.
Or, maybe animals’ preferences already impose cardinal structure, whether based on tradeoffs between duration and intensity, or based on gambles, both types of measures we use for QALY ratings in humans. It could end up being that, like in humans, there are multiple (somewhat disagreeing) candidates for cardinal measures of pain intensity, depending on how you ask: direct ratings (like the visual analogue scale, whose cardinal interpretation we explain to raters), time tradeoffs, (standard) gambles, or versions of these corrected for biases. There could be no fact of the matter about which is “correct”, although gambles are more consistent with expected utility theory.
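For concreteness, here’s a minimal sketch (my own illustration, not anything from this thread or the QALY literature verbatim) of how a time tradeoff and a standard gamble could each induce a cardinal intensity weight. It assumes disvalue is linear in intensity and additive over duration, and expected-utility reasoning for the gamble; all function names and numbers are made up.

```python
# Hypothetical sketch of two elicitation routes to cardinal pain-intensity weights.

def weight_from_time_tradeoff(d_ref: float, d_target: float) -> float:
    """Indifference between d_ref units of time at a reference pain and d_target
    units at the target pain implies intensity_target / intensity_ref = d_ref / d_target,
    assuming disvalue = intensity * duration."""
    return d_ref / d_target

def weight_from_standard_gamble(p_worst: float) -> float:
    """Indifference between the target pain for sure and a gamble giving the worst
    pain (weight 1) with probability p_worst, else no pain (weight 0), implies
    weight_target = p_worst under expected utility."""
    return p_worst

# The two methods need not agree, which is part of why there may be multiple
# (somewhat disagreeing) candidates for the cardinal structure:
print(weight_from_time_tradeoff(d_ref=10, d_target=60))   # ~0.17
print(weight_from_standard_gamble(p_worst=0.25))          # 0.25
```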
Furthermore, humans rarely think about relative intensities of pains in cardinal terms, rate them on cardinal scales, or think about these tradeoffs while they’re in pain, so imposing that structure when they don’t make such judgements would usually mean imagining hypothetical judgements. If we entertain these hypothetical judgements for humans, then it seems more intuitive that we can do so for other animals, too (although this may require imagining much more remote hypotheticals, if we’re imagining them reporting ratings on a ratio scale).
I am not familiar with the authors you cite so I will refrain from commenting on their specific proposals until I have read them. I speculate that my comment below is not particularly sensitive to their views; I am a realist about morality and phenomenal consciousness but nevertheless believe that what you are suggesting is a constructive way forward.
So long as it is transparent, I definitely think it would be reasonable to assign relative numerical weights to Welfare Footprint’s categories according to how much you yourself value preventing them. The weights you use might be entirely based on moral commitments, or might partly be based on empirical beliefs about their relative cardinal intensities (if you believe they exist), or even animals’ preferences (if you believe the cardinal intensities do not exist or believe that preferences are what really matter). Unless one assigns lexical priority to the most severe categories, we have to make a prioritization decision somehow, and assigning weights at least makes the process legible.
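To make the weighting step concrete, here’s a minimal sketch assuming Welfare Footprint-style outputs of time spent in each pain category (their Annoying/Hurtful/Disabling/Excruciating labels, if I have them right). The hours and the numerical weights are placeholders standing in for one’s own moral or empirical judgements, not values anyone here endorses.

```python
# Hypothetical hours per animal in each Welfare Footprint-style pain category.
hours_in_pain = {
    "Annoying": 300.0,
    "Hurtful": 80.0,
    "Disabling": 12.0,
    "Excruciating": 0.2,
}

# Relative disvalue per hour in each category; illustrative placeholders only.
weights = {
    "Annoying": 1.0,
    "Hurtful": 5.0,
    "Disabling": 50.0,
    "Excruciating": 1000.0,
}

# A single weighted total makes the prioritization decision explicit and legible.
weighted_disvalue = sum(hours_in_pain[c] * weights[c] for c in hours_in_pain)
print(weighted_disvalue)  # 1500.0 with the placeholder numbers above
```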