Just to briefly indicate the horns of the paradox: in order to avoid the “recklessness” of orthodox (risk-neutral) expected utility in the face of tiny chances of enormous payoffs, you must either endorse timidity or reject transitivity.
(...)
And rejecting transitivity strikes me as basically just giving up on the project of coherently systematizing how we should respond to uncertain prospects; I don’t view that as an acceptable option at all.
On orthodox expected utility theory (EUT), boundedness, and hence timidity if we can conceptualize “enormous payoffs”*, follows from standard decision-theoretic assumptions. Unbounded EU maximization violates the sure-thing principle and is vulnerable to Dutch books and money pumps, all plausibly irrational. See, e.g., Paul Christiano’s comment with St. Petersburg lotteries and my response. So, it’s pretty plausible that unbounded EU maximization (and perhaps recklessness generally) is just inevitably formally irrational, and similarly gives up on the same project of coherent systematization. Timidity seems like the only rational option. Even if it has unintuitive implications, it at least doesn’t conflict with principles of rationality.
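To see why unboundedness leads to recklessness here, a standard St. Petersburg-style illustration (my generic version, not Christiano’s specific one): if utility is unbounded, there are outcomes $x_1, x_2, \ldots$ with $u(x_n) \geq 2^n$, so the lottery that pays $x_n$ with probability $2^{-n}$ has expected utility $\sum_{n=1}^{\infty} 2^{-n} u(x_n) \geq \sum_{n=1}^{\infty} 1 = \infty$, and an unbounded EU maximizer should trade away any guaranteed outcome, however good, for it. With a bounded utility function, the sum is capped by the bound, and the argument doesn’t get off the ground.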
However, I’m not totally sure, and I just wrote this post to discuss one of my major doubts. I do think all of this counts against normative realism about decision theory, though, and so against Harsanyi’s utilitarian theorem and probably against moral realism generally.
* One might instead just respond that there are no enormous payoffs. We can only talk about enormous payoffs under timid/bounded EUT because we have two kinds of value (in the kinds of cases we’re interested in): impartial additive value, and decision-theoretic utility as a function of that impartial additive value.
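For concreteness (an illustrative functional form the footnote isn’t committed to): if $v$ is impartial additive value, decision-theoretic utility could be $u(v) = 1 - e^{-v}$ for $v \geq 0$, which is bounded above by 1. Payoffs can then be enormous in terms of $v$, but a gamble with probability $p$ of an arbitrarily enormous payoff (and otherwise nothing) is worth at most $p$ in utility, which is exactly timidity.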
Also, I saw the following paper cited somewhere; I think it shows that there’s a Dutch book that results in a sure loss for any unbounded utility function (I haven’t read it myself yet to verify this, though):
https://www.jstor.org/stable/3328594
https://onlinelibrary.wiley.com/doi/abs/10.1111/1467-8284.00178
https://academic.oup.com/analysis/article-abstract/59/4/257/173397
(All links are for the same paper.)
EDIT: It’s an infinite sequence of bets, each of which has positive EV, so you should take each one if offered in order, one at a time; but taking all of them together leads to a sure loss, because each bet’s win condition is the lose condition for the next bet, and each loss is equal to or greater in magnitude than the corresponding win value. However, to guarantee a loss, there’s no bound on the number of bets you’ll need to make, although you’ll never need infinitely many (that happens with probability 0, if the conjunction of the conditions has probability 0), like repeated double-or-nothing.
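Since I haven’t read the paper, here’s a minimal sketch of a construction matching that description; the specific events and payoffs are my own illustrative choices, not the paper’s. Bet $n$ wins $4^n$ utils if the first heads in a sequence of fair coin flips lands on flip $n$, and loses $4^{n-1} + 1$ utils if it lands on flip $n-1$ (the previous bet’s win condition):

```python
import random

def payoff(n, N):
    """Payoff (utils) of bet n when the first heads lands on flip N.

    Bet n wins 4**n if E_n ("first heads on flip n") occurs, and loses
    4**(n - 1) + 1 if E_{n-1} occurs, so bet (n-1)'s win condition is
    bet n's lose condition, with a strictly larger loss.
    """
    if N == n:
        return 4 ** n                  # win condition E_n
    if N == n - 1:
        return -(4 ** (n - 1) + 1)     # lose condition E_{n-1}
    return 0

def expected_value(n, horizon=60):
    """EV of bet n in isolation; P(E_N) = 2**-N. Positive for every n."""
    return sum(payoff(n, N) * 2.0 ** -N for N in range(1, horizon))

for n in range(1, 6):
    print(f"bet {n}: EV = {expected_value(n):+.4f}")

# Simulate: flip until the first heads (flip N), with the agent having
# accepted bets 1 through N + 1, each of which looked good on its own.
for _ in range(5):
    N = 1
    while random.random() < 0.5:
        N += 1
    net = sum(payoff(n, N) for n in range(1, N + 2))
    print(f"first heads on flip {N}: {N + 1} bets accepted, net = {net}")
```

Each bet has positive EV, yet the net is always exactly −1 util: only bets $N$ and $N+1$ pay out, and they pay $4^N$ and $-(4^N + 1)$. The win amounts $4^n$ grow without bound, which is where unbounded utility is needed; with bounded utility, the per-bet EVs can’t all remain positive.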
Though note that infinite sequences of choices are a well-known, paradox-ridden corner of decision theory, so proving that a theory falls down there is not conclusive.
I feel that exotic cases like this are interesting and help build up a picture of difficult cases for theories to cover, but they don’t count strongly against the particular theories shown to fail them. This is because it isn’t clear (1) whether any rival theories can deal with the exotic case, or (2) whether the usual conditions (or theories) need to be slightly modified in the exotic setting. In other words, it may be another area where the central idea of Richard’s post (‘Puzzles for Everyone’) applies.
There are also other cases, involving St. Petersburg-like lotteries as I mentioned in my top-level comment, and possibly others that only require a bounded number of decisions. There’s a treatment of decision theory here that derives “boundedness” (EDIT: lexicographically ordered ordinal sequences of bounded real utilities) from rationality axioms extended to lotteries with infinitely many possible outcomes:
https://onlinelibrary.wiley.com/doi/pdf/10.1111/phpr.12704
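To give a rough sense of that representation (my own minimal illustration, not the paper’s construction): utilities are sequences of bounded reals, compared lexicographically, so a difference at an earlier coordinate dominates anything at later coordinates:

```python
# Utilities as lexicographically ordered sequences of bounded reals:
# earlier coordinates strictly dominate later ones, so no amount of
# value at a later coordinate outweighs any difference at an earlier
# one. Python tuples compare lexicographically out of the box.

# Hypothetical two-coordinate utilities, each coordinate in [-1, 1]:
u_a = (0.0, 0.9)   # nothing at the dominant coordinate, a lot below it
u_b = (0.1, -0.5)  # a small edge at the dominant coordinate

assert u_b > u_a   # decided by 0.1 > 0.0; second coordinates never matter
```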
I haven’t come across any exotic cases that undermine the rationality of EU maximization with bounded utility functions relative to unbounded EU maximization, and I doubt there are any, because the former is consistent with or implied by extensions of standard rationality axioms. Are you aware of any? Or are you thinking of conflicts with other moral intuitions (e.g., impartiality, or intuitions against timidity, or against local dependence on the welfare of unaffected individuals or your own past welfare)? Or problems that are difficult for both bounded and unbounded utility functions, e.g., those related to the debate over causal vs. evidential decision theory?
We could instead believe that we need to balance rationality axioms against other normative intuitions, including moral ones, and so favour violating rationality axioms in some cases to preserve those moral intuitions.