I'm inclined to think that this is a problem with infinities in general, not with unbounded utility functions per se.
I think it's a problem for the conjunction of allowing some kinds of infinities and doing expected value maximization with unbounded utility functions. EV maximization with bounded utility functions isn't vulnerable to "isomorphic" Dutch books/money pumps or violations of the sure-thing principle. E.g., you could treat the possible outcomes of a lottery as all local parts of a larger single universe to aggregate. But then conditioning on the outcome of the first St. Petersburg lottery and comparing to the second lottery would correspond to comparing a local part of the first universe to the whole of the second universe, and the move from the whole first universe to that local part can't happen via conditioning, while the arguments depend on conditioning.
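To make the bounded/unbounded contrast concrete, here's a minimal numerical sketch (my own illustration; the bounded utility u(x) = x/(1+x) is an arbitrary choice, not anything from the original argument). The St. Petersburg lottery's partial expected utilities grow without bound under a linear, unbounded utility, but converge under the bounded one:

```python
# St. Petersburg lottery: with probability 2**-n you win a payoff of 2**n
# (n = 1, 2, 3, ...).

def expected_utility(u, terms):
    """Partial expected utility over the first `terms` outcomes."""
    return sum(2**-n * u(2**n) for n in range(1, terms + 1))

def unbounded(x):
    # Linear (unbounded) utility: each term contributes 2**-n * 2**n = 1,
    # so partial sums equal `terms` and diverge.
    return x

def bounded(x):
    # An illustrative bounded utility with supremum 1 (hypothetical choice).
    return x / (1 + x)

for terms in (10, 100, 1000):
    print(terms, expected_utility(unbounded, terms), expected_utility(bounded, terms))
# The unbounded column grows linearly in `terms`; the bounded column converges,
# since each term is at most sup(u) * 2**-n and the tail is geometric.
```

The specific bounded function doesn't matter: any utility bounded above by some B gives terms dominated by B * 2**-n, so the expected utility of the lottery is finite and the divergence driving the paradox never arises.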
Bounded utility functions have problems that unbounded utility functions don't, but these are in normative ethics and about how to actually assign values (including in infinite universes), not about violating plausible axioms of (normative) rationality/decision theory.