Also, I saw this cited somewhere, I think as showing that there’s a Dutch book that results in a sure loss for any unbounded utility function (I haven’t read the paper myself yet to verify this, though):
https://www.jstor.org/stable/3328594
https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8284.00178
https://academic.oup.com/analysis/article-abstract/59/4/257/173397
(All links for the same paper.)
EDIT: It’s an infinite sequence of bets, each of which has positive EV, so you should take each one if offered in order, one at a time, but taking all of them together leads to a sure loss, because each bet’s win condition is the losing condition for the next bet, and that next loss is equal to or greater in magnitude than the win. However, to guarantee a loss, there’s no bound on the number of bets you’ll need to make, although you’ll never need infinitely many (that happens with probability 0, assuming the conjunction of all the conditions has probability 0), like repeated double or nothing.
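To make the structure concrete, here’s a toy version of that kind of construction (my own illustrative numbers, not necessarily the paper’s exact setup): events E_1, E_2, … partition the space (up to a null event) with P(E_n) = 2^-n, and bet n loses 3^n utils if E_n occurs and wins 2*3^n + 1 utils if E_(n+1) occurs. Each bet on its own has expected utility 2^-(n+1) > 0, but whichever E_k actually occurs, the win from bet k-1 is more than wiped out by the loss on bet k, and the losses 3^n require an unbounded utility function:

```python
# Toy Dutch-book sketch (illustrative construction, not necessarily the paper's).
# Events E_1, E_2, ... partition the space (up to a null event) with P(E_n) = 2^-n.
# Bet n, in utils: lose 3^n if E_n occurs, win 2*3^n + 1 if E_(n+1) occurs.

def bet_ev(n: int) -> float:
    """Expected utility of bet n taken on its own."""
    p_lose, p_win = 2.0 ** -n, 2.0 ** -(n + 1)
    loss, gain = 3.0 ** n, 2 * 3.0 ** n + 1
    return p_win * gain - p_lose * loss  # simplifies to 2^-(n+1) > 0

def net_payoff(k: int, num_bets: int) -> float:
    """Total utility from taking bets 1..num_bets when E_k is the event that occurs."""
    total = 0.0
    for n in range(1, num_bets + 1):
        if k == n:        # E_n is bet n's losing condition
            total -= 3.0 ** n
        elif k == n + 1:  # E_(n+1) is bet n's winning condition
            total += 2 * 3.0 ** n + 1
    return total

print([round(bet_ev(n), 4) for n in range(1, 6)])         # every bet has positive EV
print([net_payoff(k, num_bets=20) for k in range(1, 8)])  # a loss whenever k <= 20
# If E_21 or later occurs instead, the 20-bet book breaks even or comes out ahead,
# which is why no fixed finite number of bets guarantees the loss.
```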
Though note that infinite sequences of choices are a well-known paradox-ridden corner of decision theory, so proving that a theory falls down there is not conclusive.
I feel that exotic cases like this are interesting and help build up a picture of difficult cases for theories to cover, but don’t count strongly against particular theories which are shown to fail them. This is because it isn’t clear (1) whether any rival theories can deal with the exotic case, or (2) whether the usual conditions (or theories) need to be slightly modified in the exotic setting. In other words, it may be another area where the central idea of Richard’s post (‘Puzzles for Everyone’) applies.
There are also other cases, involving St. Petersburg-like lotteries, as I mentioned in my top-level comment, and possibly others that require only a bounded number of decisions. There’s a treatment of decision theory here that derives “boundedness” (EDIT: lexicographically ordered ordinal sequences of bounded real utilities) from rationality axioms extended to lotteries with infinitely many possible outcomes:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/phpr.12704
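For reference, the kind of lottery driving those St. Petersburg-like cases looks something like the following (a minimal sketch of my own, not taken from the linked paper): with an unbounded utility function you can have outcome n occur with probability 2^-n while carrying utility 2^n, so every outcome contributes 1 to the expected utility and the total diverges as more outcomes are included.

```python
# Minimal St. Petersburg-like lottery sketch for an unbounded utility function
# (illustrative only; not taken from the linked paper). Outcome n has
# probability 2^-n and utility 2^n, so each outcome adds 1 to the expected
# utility and the partial sums grow without bound.

def partial_expected_utility(num_outcomes: int) -> float:
    return sum(2.0 ** -n * 2.0 ** n for n in range(1, num_outcomes + 1))

print(partial_expected_utility(10))    # 10.0
print(partial_expected_utility(1000))  # 1000.0 (grows without bound)
```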
I haven’t come across any exotic cases that undermine the rationality of EU maximization with bounded utility functions relative to unbounded EU maximization, and I doubt there are any, because the former is consistent with or implied by extensions of standard rationality axioms. Are you aware of any? Or are you thinking of conflicts with other moral intuitions (e.g. impartiality, or intuitions against timidity, against local dependence on the welfare of unaffected individuals, or against dependence on your own past welfare)? Or of problems that are difficult for both bounded and unbounded utility functions, e.g. those related to the debate over causal vs evidential decision theory?
We could believe we need to balance rationality axioms against other normative intuitions, including moral ones, and so favour violating the rationality axioms in some cases to preserve those moral intuitions.