This is an argument against hedonic utility being cardinal and against widespread commensurability between hedonic experiences of different kinds. It seems that our tradeoffs, however we arrive at them, don’t track the moral value of hedonic experiences.
Let X be some method or system by which we think we can establish the cardinality and/or commensurability of our hedonic experiences, along with rough tradeoff rates. For example, X could be the reinforcement learning system in our brains, our actual choices, or our judgements of value (including of intensity).
If X is not identical to our hedonic experiences, then it may be that X is itself what’s forcing the observed cardinality and/or commensurability onto our hedonic experiences. But if it’s X that’s doing this, and it’s the hedonic experiences themselves that are of moral value, then the cardinality and/or commensurability is a property of X, not of our hedonic experiences themselves. So the observed cardinality and/or commensurability is a moral illusion.
Here’s a more specific illustration of this argument:
Do our reinforcement systems have access to our whole experiences (or the whole hedonic component), or only to some subset of the firing neurons responsible for them? And what if they’re more strongly connected to the parts of the brain responsible for certain kinds of experiences than for others? There seems to be a continuum of ways our reinforcement systems could be off, or even badly off, so it would be surprising to me if they tracked true moral tradeoffs perfectly. Change (or add or remove) one connection between a neuron in the hedonic system and one in the reinforcement system, and the tradeoffs made will now be different, without any change to the moral value of the hedonic states. If the link between hedonic intensity and reinforcement strength is so fragile, what are the chances the reinforcement system got it exactly right in the first place? The probability should be 0 (assuming my model is right).
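Here’s a minimal sketch of this fragility in code, under a deliberately crude assumption I’m introducing for illustration: hedonic neurons feed a reinforcement signal through a weighted sum, and the “tradeoff rate” the system reads off is just a ratio of those signals. All names and numbers are hypothetical, not a model of any real brain.

```python
# Toy model of the fragility argument: hedonic states feed into a
# reinforcement signal through connection weights. Perturbing one
# connection changes the implied tradeoffs, even though the hedonic
# states themselves (and so their moral value) are untouched.

import numpy as np

# Activations of "hedonic" neurons for two kinds of experience, A and B.
# These are fixed: the experiences themselves never change below.
hedonic_A = np.array([1.0, 0.5, 0.8])
hedonic_B = np.array([0.9, 0.7, 0.4])

# Connection weights from hedonic neurons to the reinforcement system.
weights = np.array([0.6, 0.3, 0.9])

def reinforcement(activations, w):
    """Reinforcement strength as a weighted sum of hedonic activations."""
    return float(activations @ w)

# Tradeoff rate between A and B, as read off by the reinforcement system.
rate_before = reinforcement(hedonic_A, weights) / reinforcement(hedonic_B, weights)

# Change a single connection; the hedonic states stay exactly the same.
weights_perturbed = weights.copy()
weights_perturbed[1] += 0.4

rate_after = reinforcement(hedonic_A, weights_perturbed) / reinforcement(hedonic_B, weights_perturbed)

print(f"implied tradeoff rate before: {rate_before:.3f}")  # ~1.324
print(f"implied tradeoff rate after:  {rate_after:.3f}")   # ~1.201, from one changed weight
```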
At least for similar hedonic experiences of different intensities, if those intensities are actually cardinal, we might expect the reinforcement system to capture only some continuous monotonic transformation of them, not a linear one. But then it could be applying different monotonic transformations to different kinds of hedonic experiences. So why should we trust the tradeoffs it makes between these different kinds?
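To make this concrete, here is a toy example with two arbitrarily chosen transformations (my assumption, purely for illustration): suppose, for the sake of argument, A and B did have true cardinal intensities, and the reinforcement system reported a log transform of one and a square-root transform of the other. Order is preserved within each kind, but the cross-kind tradeoffs come out wrong.

```python
# Two order-preserving (monotonic) transformations, one per kind of
# experience. Each preserves the within-kind ranking, but the cross-kind
# tradeoffs they imply disagree with the (stipulated) true ones.

import math

true_A = [1.0, 4.0, 9.0]   # hypothetical true cardinal intensities, kind A
true_B = [1.0, 4.0, 9.0]   # identical true intensities for kind B, for contrast

reported_A = [math.log(x) + 1 for x in true_A]  # monotonic on A
reported_B = [math.sqrt(x) for x in true_B]     # monotonic on B

# True tradeoff: the second A-experience exactly offsets the second B-experience.
print(true_A[1] / true_B[1])          # 1.0
# Reported tradeoff: the reinforcement system says otherwise.
print(reported_A[1] / reported_B[1])  # (log(4) + 1) / sqrt(4) ≈ 1.19
```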
The “cardinal hedonist” might object that X (e.g. introspective judgement of intensity) could be identical to our hedonic experiences, or at least track their cardinality closely enough.
I think, as a matter of fact, X will necessarily involve extra (neural) machinery that can distort our judgements, as I illustrate with the reinforcement learning case. It could be that our judgements are still approximately correct despite this, though.
Most importantly, the accuracy of our judgements depends on there being something fundamental that they’re tracking in the first place. So hedonists who rely on cardinal judgements of intensity owe us a good explanation of where this supposed cardinality comes from. I don’t think such an explanation is possible with our current understanding of neuroscience, and I’m skeptical that it ever will be; I think there’s a great deal of unavoidable arbitrariness in our understanding of consciousness.
Here’s an illustration with math. Let’s consider two kinds of hedonic experiences, $A$ and $B$, with at least three different (signed) intensities each, $a_1 < a_2 < a_3$ and $b_1 < b_2 < b_3$ respectively, with $I_A = \{a_1, a_2, a_3\}$ and $I_B = \{b_1, b_2, b_3\}$. These intensities are at least ordered, but not necessarily cardinal like real numbers or integers, and we can’t necessarily compare $A$ and $B$. For example, $A$ and $B$ might be pleasure and suffering generally (with suffering negatively signed), or more specific experiences of these.
Then, what X does is map these intensities to numbers through some function,
$$f : I_A \cup I_B \to \mathbb{R}$$
satisfying $f(a_1) < f(a_2) < f(a_3)$ and $f(b_1) < f(b_2) < f(b_3)$. We might even let $I_A$ and $I_B$ be ordered continuous intervals, isomorphic to a real-valued interval, and have $f$ be continuous and increasing on each of $I_A$ and $I_B$, but again, it’s $f$ that’s introducing the cardinalization and commensurability (or a different cardinalization and commensurability from the real one, if any); these aren’t inherent to $A$ and $B$.
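To see how underdetermined $f$ is, here is a small sketch with two candidate maps. The particular values are arbitrary assumptions on my part, which is exactly the point: both respect the within-kind orderings, yet they disagree about every comparison between $A$ and $B$.

```python
# Two functions f and g that both satisfy the ordering constraints on
# I_A and I_B, but imply opposite cross-kind tradeoffs. Nothing in the
# ordinal structure of A and B picks one over the other.

a1, a2, a3 = "a1", "a2", "a3"   # ordinal labels only; no inherent magnitudes
b1, b2, b3 = "b1", "b2", "b3"

f = {a1: 1, a2: 2, a3: 3, b1: 10, b2: 20, b3: 30}   # under f, B dwarfs A
g = {a1: 10, a2: 20, a3: 30, b1: 1, b2: 2, b3: 3}   # under g, A dwarfs B

# Both maps satisfy f(a1) < f(a2) < f(a3) and f(b1) < f(b2) < f(b3).
for h in (f, g):
    assert h[a1] < h[a2] < h[a3] and h[b1] < h[b2] < h[b3]

print(f[a2] > f[b2])  # False: under f, b2 outweighs a2
print(g[a2] > g[b2])  # True:  under g, a2 outweighs b2
```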