There are also other cases involving St. Petersburg-like lotteries, as I mentioned in my top-level comment, and possibly others that require only a bounded number of decisions. There’s a treatment of decision theory here that derives “boundedness” (EDIT: lexicographically ordered ordinal sequences of bounded real utilities) from rationality axioms extended to lotteries with infinitely many possible outcomes:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/phpr.12704
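For concreteness, here's the standard St. Petersburg calculation (not from the linked paper, just the textbook version): a lottery paying a prize of value $2^n$ with probability $2^{-n}$ has divergent expected utility under an unbounded linear utility function, but finite expected utility under any bounded one.

```latex
\sum_{n=1}^{\infty} \underbrace{2^{-n}}_{\Pr(\text{outcome } n)} \, u(2^n)
=
\begin{cases}
\sum_{n=1}^{\infty} 2^{-n} \cdot 2^n = \sum_{n=1}^{\infty} 1 = \infty
  & \text{if } u(x) = x \text{ (unbounded)}, \\[4pt]
\le \sup_x u(x) < \infty
  & \text{if } u \text{ is bounded}.
\end{cases}
```

So the divergence, and the resulting preference reversals in St. Petersburg-like cases, is specifically a feature of unbounded utility functions.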
I haven’t come across any exotic cases that undermine the rationality of EU maximization with bounded utility functions relative to unbounded EU maximization, and I doubt there are, because the former is consistent with, or implied by, extensions of the standard rationality axioms. Are you aware of any? Or are you thinking of conflicts with other moral intuitions (e.g. impartiality, or intuitions against timidity, or against local dependence on the welfare of unaffected individuals or on your own past welfare)? Or of problems that are difficult for both bounded and unbounded utility functions, e.g. those related to the debate between causal and evidential decision theory?
We could believe we need to balance rationality axioms against other normative intuitions, including moral ones, and so favour violating the rationality axioms in some cases in order to preserve those moral intuitions.