You could discount utilons—say there is a “meta-utilon” which is a function of utilons, like maybe meta-utilons = log(utilons). And then you could maximize expected meta-utilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying “better for any non-decreasing meta-utilon function”.
But you could also pick a single meta-utilon function, and I believe the resulting ordering would at least be consistent.
Really, you might as well call the meta-utilons “utilons” though. They are just not necessarily additive.
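A small sketch of the equivalence being claimed: first-order stochastic dominance between two lotteries over utilons coincides with one lottery having higher expected meta-utilons for every non-decreasing transformation. The lotteries `A` and `B` below are made-up examples, and only a few sample monotone functions are spot-checked:

```python
import math

# Hypothetical discrete lotteries over utilons: (outcome, probability) pairs.
A = [(1, 0.2), (4, 0.5), (9, 0.3)]
B = [(1, 0.4), (4, 0.4), (9, 0.2)]

def cdf(lottery, x):
    """P(outcome <= x)."""
    return sum(p for v, p in lottery if v <= x)

def dominates(a, b):
    """First-order stochastic dominance: a's CDF lies at or below b's everywhere."""
    points = sorted({v for v, _ in a} | {v for v, _ in b})
    return all(cdf(a, x) <= cdf(b, x) for x in points)

def expected(lottery, f):
    """Expected meta-utilons under the transformation f."""
    return sum(p * f(v) for v, p in lottery)

print(dominates(A, B))  # True: A shifts probability toward higher utilons

# Every non-decreasing meta-utilon function we try agrees on the ranking:
for f in (lambda x: x, math.log, math.sqrt):
    assert expected(A, f) >= expected(B, f)
```

Checking a handful of monotone functions is of course only illustrative; the actual equivalence quantifies over all non-decreasing functions.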
A monotonic transformation like log doesn’t solve the infinity issue, right?
Time discounting (to get comparisons between finite sums) doesn’t preserve the ordering over sequences.
This makes me think you are thinking about something else?
Monotonic transformations can indeed solve the infinity issue. For example, the sum of 1/n doesn’t converge, but the sum of 1/n^2 does, even though x → x^2 is monotonic.
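A quick numerical illustration of this point: applying the monotone map x → x² term-wise to the divergent harmonic series yields the convergent Basel series, whose sum is π²/6. The cutoff of 10⁶ terms is arbitrary, just enough to make the contrast visible:

```python
import math

N = 10**6

# Harmonic series: partial sums grow without bound, roughly like log(N).
harmonic = sum(1 / n for n in range(1, N + 1))

# Same terms pushed through the monotonic map x -> x^2: converges to pi^2/6.
basel = sum(1 / n**2 for n in range(1, N + 1))

print(harmonic)  # ~14.39 at N = 10^6, and still growing
print(basel)     # ~1.64493, close to pi^2/6
```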