I’m inclined to think that this is a problem with infinities in general, not with unbounded utility functions per se.
I think it’s a problem for the conjunction of allowing some kinds of infinities and doing expected value maximization with unbounded utility functions. EV maximization with bounded utility functions isn’t vulnerable to “isomorphic” Dutch books/money pumps or violations of the sure-thing principle. For example, you could treat the possible outcomes of a lottery as all local parts of a larger single universe to aggregate. But then conditioning on the outcome of the first St. Petersburg lottery and comparing it to the second lottery would correspond to comparing a local part of the first universe to the whole of the second universe; the move from the whole first universe to that local part can’t happen via conditioning, and the arguments depend on conditioning.
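To make the contrast concrete, here is a minimal worked sketch of why the St. Petersburg expectation diverges under an unbounded (linear) utility but stays finite under a bounded one, assuming the standard payoffs of 2^n with probability 2^{-n} and, purely for illustration, the bounded utility u(x) = 1 − 1/(1+x):

```latex
% St. Petersburg lottery: payoff 2^n with probability 2^{-n}, n = 1, 2, ...
% Unbounded (linear) utility u(x) = x: the expectation diverges.
\mathbb{E}[u(X)] = \sum_{n=1}^{\infty} 2^{-n}\, u(2^{n})
                = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n}
                = \sum_{n=1}^{\infty} 1 = \infty

% Illustrative bounded utility u(x) = 1 - \frac{1}{1+x} (so u < 1 everywhere):
% the expectation converges.
\mathbb{E}[u(X)] = \sum_{n=1}^{\infty} 2^{-n}\left(1 - \frac{1}{1+2^{n}}\right)
                < \sum_{n=1}^{\infty} 2^{-n} = 1
```

With a bounded utility like this, each St. Petersburg-style lottery gets a finite, well-defined expected utility, so the comparisons involved in the Dutch book/money pump arguments don’t blow up in the way the unbounded case allows.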
Bounded utility functions have problems that unbounded utility functions don’t, but these are problems in normative ethics, about how to actually assign values (including in infinite universes), not violations of plausible axioms of (normative) rationality/decision theory.