I’m still trying to work through the maths on this, so I won’t respond in much detail until I’ve got further with that; I may end up writing a separate post. I did start off at your position, so there’s some chance I will end up there. I find this very confusing to think about.
Some brief comments on a couple of things:
I agree with this, but I don’t think this is our epistemic position, because we can understand all value relative to our own experiences.
I think “relative” is the operative word here. That is, you experience that a toe stub is 10 times worse than a papercut, and this motivates the development of moral theories that are consistent with that ratio and rules out ones that are not (e.g. ones that say the two are equally bad). But there is an additional bit of parameter fixing that has to happen to get from the theory predicting this relative difference to it predicting the absolute amount.
My claim is that, at least generally speaking, and I think actually always, the theories under consideration predict only these relative differences and not the absolute amounts. E.g. if a theory supposes that a certain pain receptor causes suffering when activated, then it might suppose that 10 receptors being activated cause 10 times as much suffering, but it says nothing about the absolute amount. This is also true of more fundamental theories (e.g. more information processing ⇒ more sentience). I have some ideas about why this is[1], but mainly I can’t think of any examples where this is not the case. If you can think of any, please tell me, as that would at least partially invalidate this scale invariance thing (which would be good).
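To put this in symbols (my notation, not anything the theories themselves supply): each theory $T$ determines a welfare function only up to a positive constant,

$$W_T(x) = k_T \, w_T(x), \qquad k_T > 0,$$

where the relative judgements the theory explains fix $w_T$ (e.g. $w_T(\text{toe stub}) = 10 \, w_T(\text{papercut})$) but leave $k_T$ completely free. Every within-theory comparison is unchanged under $k_T \mapsto c \, k_T$ for any $c > 0$, and that is the scale invariance I mean.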
I think you would also say that theories don’t need to predict this overall scale parameter because we can always fix it based on our observations of absolute utility… this is the bit of maths that I’m not clear on yet, but I currently think it is not true (i.e. the scale parameter still matters, especially when you have a prior reason to think there would be a difference between the theories).
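Sketching the worry (again in my notation, and this is exactly the part I haven’t made rigorous yet): maximising expected value across theories means computing something like

$$\mathbb{E}[W(x)] = \sum_T p(T) \, k_T \, w_T(x).$$

Within a single theory, $k_T$ is idle, since $\arg\max_x k \, w(x) = \arg\max_x w(x)$ for any $k > 0$; but in the mixture the relative sizes of the $k_T$ determine which theory dominates, so fixing them is a substantive step rather than a harmless convention.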
I agree that directly observing the value of a toe stub, say, under hedonism might not tell you much or anything about its absolute value under non-hedonistic theories of welfare… However, I think we can say more under variants of closer precise theories.
I was intending to restrict attention to theories that fall under hedonism, because I think this is the case where this kind of cross-theory aggregation should work best. And given that I think this scale invariance problem arises there, it would be even worse when considering more dissimilar theories.
So I was considering only theories where the welfare-relevant states are things that feel pretty close to pleasure and pain, and you can be uncertain about how good or bad different states are for common-sense reasons[2], but you’re able to tell at least roughly how good/bad at least some states are.
Some are mentioned in the previous comment. One is that the prescriptions of utilitarianism have this scale invariance (they only distinguish better from worse), as do the behaviours associated with pleasure/pain (e.g. you can only communicate that something is more/less painful, or [for animals] show an aversion to a more painful thing in favour of a less painful one).
E.g. you might not remember them, you might struggle to factor in duration, or the states might come along with some non-welfare-relevant experience which biases your recollection (e.g. a painfully bright red light vs a painfully bright green light).
My claim is that, at least generally speaking, and I think actually always, the theories under consideration predict only these relative differences and not the absolute amounts.
(...)
I have some ideas about why this is[1], but mainly I can’t think of any examples where this is not the case. If you can think of any, please tell me, as that would at least partially invalidate this scale invariance thing (which would be good).
I think what matters here is less whether they predict absolute amounts than which ones can be put on common scales. If everything could be put on the same common scale, then we would predict values relative to that common scale, and could treat the common scale like an absolute one. But scale invariance would still depend on whether you use that scale in a scale-invariant way with your moral theory.
I do doubt that all theories can be put on one common scale together this way, but I suspect we can find common scales across some subsets of theories at a time. I think there usually is no foundational common scale between a given pair of theories, but I’m open to the possibility in some cases, e.g. across approaches for counting conscious subsystems, between causal and evidential decision theory (MacAskill et al., 2019), and in some pairs of person-affecting vs total utilitarian views (Riedener, 2019, also discussed in my section here). This is because these theories seem to recognize the same central and foundational reasons, but just find that they apply differently or in different numbers. You can still value those reasons identically across theories. So, it seems like they’re using the same scale (all else equal), but differently.
I’m not sure, though. And maybe there are multiple plausible common scales for a given set of theories, but this could mean a two envelopes problem between those common scales, not between the specific theories themselves.
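As a toy illustration of that worry, here is a sketch with entirely made-up numbers (the two “theories” and the papercut/toe stub states are just placeholders): two hedonic theories agree on signs and rankings but disagree about the toe stub : papercut ratio, and the cross-theory comparison depends on which state you normalise at.

```python
# Toy sketch: two envelopes between common scales. All numbers made up.
p = {"A": 0.5, "B": 0.5}  # credences in the two hypothetical theories

# Within-theory welfare costs; each theory is defined only up to an
# overall positive scale factor, so only the ratios below are meaningful.
w = {
    "A": {"papercut": 1.0, "toe stub": 10.0},
    "B": {"papercut": 1.0, "toe stub": 100.0},
}

def expected_cost(state, unit):
    """Expected badness of `state` after rescaling every theory so that
    `unit` costs exactly 1 on each -- i.e. one choice of common scale."""
    return sum(p[t] * w[t][state] / w[t][unit] for t in w)

for unit in ("papercut", "toe stub"):
    ratio = expected_cost("toe stub", unit) / expected_cost("papercut", unit)
    print(f"normalised at the {unit}: a toe stub is {ratio:.1f}x a papercut")

# Prints 55.0x when normalising at the papercut but ~18.2x when
# normalising at the toe stub: each theory's internal ratios are
# untouched, yet the cross-theory comparison changes with the scale.
```

So even after fixing the set of theories and their credences, the choice of normalisation unit does real work, which is the two envelopes problem reappearing one level up, between common scales rather than between theories.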
And I agree that there probably isn’t a shared foundational common scale across all theories of consciousness, welfare and moral weights (as I discuss here).
I think you would also say that theories don’t need to predict this overall scale parameter because we can always fix it based on our observations of absolute utility
this is the bit of maths that I’m not clear on yet, but I currently think it is not true (i.e. the scale parameter still matters, especially when you have a prior reason to think there would be a difference between the theories).
Ya, that’s roughly my position, and more precisely that we can construct common scales based on our first-person observations of utility, although with the caveat that in fact these observations don’t uniquely determine the scale, so we still end up with multiple first-person observation-based common scales.

Do you think we generally have the same problem for other phenomena, like how much water there is across theories of the nature of water, or the strength of gravity as we moved from the Newtonian picture to general relativity? So, we shouldn’t treat theories of water as using a common scale, or theories of gravity as using a common scale? Again, maybe you end up with multiple common scales for water, and multiple for gravity, but the point is that we can still make some intertheoretic comparisons, even if vague/underdetermined, based on the observations the theories are meant to explain, rather than say nothing about how they relate.
In these cases, including consciousness, water and gravity, it seems like we first care about the observations, and then we theorize about them, or else we wouldn’t bother theorizing about them at all. So we do some (fairly) theory-neutral valuing.
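For what it’s worth, the gravity half of the analogy can be made concrete (this is just standard physics, not anything specific to this discussion): general relativity reproduces the Newtonian acceleration

$$g \approx \frac{GM}{r^2}$$

in the weak-field limit, so both theories are calibrated against the same measured accelerations. “The strength of gravity” is pinned to observations both theories are answerable to, rather than to anything internal to either theory, and that seems like the sense in which they share a scale.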