One possibility in favour of non-hedonic values mattering much more is to internalize an individual’s ethical views, obligations or rankings of social outcomes into their own welfare. For example, maybe Tortured Tim is undergoing torture in order to prevent the torture or deaths of others, possibly loved ones or many other people. We wouldn’t necessarily say the torture in itself is any less bad for him hedonically, and, among the available alternatives, undergoing it may very well be best on his ethical views and best on many impartial views. Can we say that it’s better for him? We might say so based on preference-based values, or by assigning substantial non-hedonic value to acting ethically or being virtuous.
On the other hand, if we allow such values, then our welfare range over a short interval of time may become unbounded (especially for someone with aggregative views, like a utilitarian), which seems psychologically suspect. Furthermore, if we’re able to make interpersonal utility comparisons across such utility functions at all (but maybe we can’t or shouldn’t!), we may need to rely on psychologically plausible units, e.g. how psychologically motivating something is. That could keep such non-hedonic preferences from outweighing torture, since torture is extremely psychologically motivating, plausibly near the extreme of psychological motivation, at least while it’s happening.
Or, Tim’s non-hedonic preferences about his own torture relative to other things may just be a separate component of his welfare, one we shouldn’t normalize by how bad torture is for him hedonically. His preferences may also change over time, especially while he’s being tortured compared to when he’s not.