I don’t think Romeo even has to deny any of the assumptions. Harsanyi’s result, derived from the three assumptions, is not enough to determine how to do intersubjective utility comparisons. It merely states that social welfare will be some linear combination of individual utilities. While this already greatly restricts the way in which utilities are aggregated, it does not specify which weights to use for this sum.
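In symbols, the conclusion (as I understand it; this is my paraphrase, not Harsanyi's exact statement) is only that

$$W = \sum_i w_i \, u_i, \qquad w_i \geq 0,$$

where the $u_i$ are the individual von Neumann–Morgenstern utility functions and the weights $w_i$ are precisely what the three assumptions leave open.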
Moreover, arguing that weights should be equal based on the veil of ignorance, as I believe Harsanyi does, is not sufficient, since utility functions are only determined up to positive affine transformations, which include rescalings. (This point has been made in the literature as a criticism of preference utilitarianism, I believe.) So there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.
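To make that concrete, here is a minimal sketch with made-up numbers (the agents, outcomes, and utilities are all hypothetical, not taken from Harsanyi or from the post): with equal weights, range normalization and variance normalization of the very same utility functions can rank two options differently.

```python
import numpy as np

# Two hypothetical agents, five outcomes; we compare outcomes "A" and "B"
# under an equal-weight sum of normalized utilities.
outcomes = ["A", "B", "C", "D", "E"]
raw = np.array([
    [3.0, 7.0, 5.0, 0.0, 10.0],      # agent 1, on some arbitrary scale
    [100.0, 50.0, 0.0, 0.0, 100.0],  # agent 2, on a different arbitrary scale
])

def range_normalize(u):
    # Rescale each agent's utilities so that min = 0 and max = 1 over the outcome set.
    lo, hi = u.min(axis=1, keepdims=True), u.max(axis=1, keepdims=True)
    return (u - lo) / (hi - lo)

def variance_normalize(u):
    # Rescale each agent's utilities to mean 0 and standard deviation 1 over the
    # outcome set, treating outcomes as equally likely (itself a modelling choice).
    return (u - u.mean(axis=1, keepdims=True)) / u.std(axis=1, keepdims=True)

for name, norm in [("range", range_normalize), ("variance", variance_normalize)]:
    welfare = norm(raw).sum(axis=0)  # equal-weight sum across agents
    a, b = welfare[outcomes.index("A")], welfare[outcomes.index("B")]
    print(f"{name:8s} normalization: W(A)={a:+.3f}, W(B)={b:+.3f} "
          f"-> prefers {'A' if a > b else 'B'}")
```

With these numbers, range normalization prefers A while variance normalization prefers B, so the choice of normalization is doing real work in the aggregation, not just fixing an irrelevant constant.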
Of course, if you are not using a kind of preference utilitarianism but are instead just aggregating some quantities you believe to have an absolute scale (such as happiness and suffering), then you could argue that utility functions should just correspond to this one absolute scale, with the same scaling for everyone. Though I think this is also not a trivial argument: there are potentially different ways to get from this absolute scale or axiology to behavior towards risky gambles, and it is that behavior which in turn determines the utility functions.
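A toy illustration of that last point (my numbers, not from the original discussion): suppose $h$ is the quantity with the absolute scale. An agent with utility $u(h) = h$ and one with utility $u(h) = \sqrt{h}$ agree about $h$ itself, but faced with a 50/50 gamble between $h = 0$ and $h = 4$ versus a sure $h = 1.5$, the first takes the gamble (expected utility $2$ vs. $1.5$) and the second takes the sure thing (expected utility $1$ vs. $\sqrt{1.5} \approx 1.22$). So the absolute scale alone does not pin down the utility functions that enter a Harsanyi-style sum.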
> there seems to be no way to determine what equal weights should look like, without settling on a way to normalize utility functions, e.g., by range normalization or variance normalization. I think the debate about intersubjective utility comparisons comes in at the point where you ask how to normalize utility functions.
Yup, thanks. And the normalization question comes up across time as well as across agents at a particular moment.