I can’t turn this into a utility function because there’s too much agnosticism (and I think human utility functions are fictitious anyway). I will say that my preferences seem to be guided not only by a desire for intergenerational equality, but also for intergenerational agency.
If I’m a decision maker I’m going to consult all the relevant parties, but I can’t do that for the next generation. The next generation gets no say in the matter and yet feels the consequences just as vividly. There is no option where the next generation is (ex ante) better off than the previous generation, but there is an option where they’re worse off (E). E is (imo) the worst option, and if there were an opposite of E, a guaranteed −5 for this generation in exchange for a guaranteed +10 for the next, I would consider that the best option.
Notice that the intergenerational inequality between the two is the same, but because the next generation has no agency I actually want there to be inequality (in their favor) as a kind of compensation. I think this extends to other moral decision processes too. Whenever a party can’t consent to a decision (because they’re in the future, far away, don’t understand me...), I’m inclined to make the payoff more unequal in their favor as a rectification.
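Purely to make that compensation intuition concrete, here is a minimal Python sketch (not a model I actually endorse as my utility function): a toy welfare score with an inequality penalty plus a bonus for tilting payoffs toward the party that had no say. The payoff numbers, the weights, and the reading of E as (+10 for this generation, −5 for the next) are all illustrative assumptions.

```python
# Toy sketch: same total payoff and same inequality in both options,
# but the score differs once we credit tilting payoffs toward the
# party without agency. All numbers are illustrative assumptions.

def score(payoffs, had_agency, inequality_aversion=0.5, agency_compensation=0.3):
    """Total welfare, penalized for spread, plus a bonus when the
    distribution favors parties who could not consent."""
    total = sum(payoffs)
    spread = max(payoffs) - min(payoffs)
    # How much better off the non-consenting parties are than the deciders.
    tilt = (sum(p for p, a in zip(payoffs, had_agency) if not a)
            - sum(p for p, a in zip(payoffs, had_agency) if a))
    return total - inequality_aversion * spread + agency_compensation * tilt

# (this generation, next generation); only this generation gets a say.
option_E      = [+10, -5]   # assumed reading of E: deciders gain, next generation loses
opposite_of_E = [-5, +10]   # the "opposite of E" described above
had_agency    = [True, False]

print(score(option_E, had_agency))       # lower: inequality against the voiceless party
print(score(opposite_of_E, had_agency))  # higher: same inequality, but in their favor
```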
EDIT: Maybe we can construct other thought experiments to see to what degree agency has value. Clearly we value it in ourselves (e.g. people pay for it) and in others (e.g. people die for democracy), but to what extent? If I have 100% agency in a situation, I feel that is valuable enough to compensate those without it with some WELLBYs, even though I dislike inequality. What happens if we shift the parameters (e.g. more total wellbeing with less equality in wellbeing, but less total agency with more equal agency)?
I get the sense that I value equality in agency more than equality in WELLBYs. I think I also value total agency (without increasing agency inequality) more than WELLBY equality. If we add risk aversion over these parameters, it seems I am more risk averse about creating unequal agency than unequal WELLBYs. For the other payoffs it’s hard to say. Do other people have the same inclination? It might be interesting to create more thought experiments.
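To make the parameter-shifting question a bit more concrete, here is a hedged sketch of how the trade-off could be probed: a scoring function with separate placeholder weights for total WELLBYs, WELLBY equality, total agency, and agency equality. The weights, scenarios, and function names are arbitrary assumptions (chosen only so that agency equality weighs more than WELLBY equality, per my stated leaning); which scenario wins depends entirely on them, and the point is only that the trade-off can be parameterized.

```python
# Hedged sketch: separate placeholder weights for the four parameters
# discussed above. Equality enters as a (negative) dispersion penalty.

from statistics import pstdev

def preference_score(wellbys, agencies,
                     w_total_wellby=1.0,
                     w_wellby_equality=0.5,
                     w_total_agency=0.8,
                     w_agency_equality=1.2):
    """Higher is better. Weights are illustrative, not measured preferences."""
    return (w_total_wellby * sum(wellbys)
            - w_wellby_equality * pstdev(wellbys)
            + w_total_agency * sum(agencies)
            - w_agency_equality * pstdev(agencies))

# One version of the trade-off above: more total wellbeing but less wellbeing
# equality and more equal (though lower) agency, versus the reverse.
print(preference_score(wellbys=[12, 2], agencies=[0.4, 0.4]))
print(preference_score(wellbys=[6, 6],  agencies=[1.0, 0.2]))
```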
Another hypothesis is that I don’t value agency, but want to minimize blame. If I construct the thought experiments such that no-one knows I did it, I still feel the same way. So it can’t be blame. Maybe it’s not blame but blameworthiness, or maybe responsibility. However, responsibility and blameworthiness are entwined with agency so this might not be a useful distinction. If you have a thought experiment that untangles them, please let me know.