Yeah, I've seen mentions of Buchak's work and one talk from her, but didn't really get it, and currently (with maybe medium confidence?) still think that, when talking about utility itself, and thus having accounted for diminishing returns and all that, one should be risk-neutral.
I hadn't heard of martingales, and have relatively limited knowledge of the St Petersburg paradox. It seems to me (low confidence) that:
Things like the St Petersburg paradox and Pascal's mugging are plausible candidates for reasons to reject standard expected utility maximisation, at least in certain edge cases, and maybe also expected value reasoning
Recognising that there are diminishing returns to many (most?) things at least somewhat blunts the force of those weird cases
Things like accepting risk aversion or rounding infinitesimal probabilities to 0 may solve the problems without us having to get rid of expected value reasoning or entirely get rid of expected utility maximisation (just augment it substantially); both moves are sketched in code after this list
There are some arguments for just accepting as rational what expected utility maximisation says in these edge cases: it's not totally clear that our aversion to the 'naive probabilistic' answer here is valid; maybe that aversion just reflects scope neglect, or the fact that, in the St Petersburg case, there's the overlooked cost of it potentially taking months of continual play to earn substantial sums
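To make the diminishing-returns and rounding-down moves concrete, here is a minimal sketch in Python. The 200-term horizon, the one-in-a-million probability floor, and log utility are all illustrative choices of mine, not anything fixed by the discussion above:

```python
# Sketch: how diminishing returns (log utility) and rounding tiny
# probabilities to 0 each tame the St Petersburg game, where heads
# first appearing on flip k pays 2**k with probability 2**-k.
import math

def st_petersburg_value(utility, prob_floor=0.0, max_flips=200):
    """Expected utility of the game, truncated at max_flips terms.

    utility:    maps a monetary payoff to utils.
    prob_floor: probabilities below this are rounded down to 0
                (the 'rationally negligible probabilities' move).
    """
    total = 0.0
    for k in range(1, max_flips + 1):
        p = 2.0 ** -k
        if p < prob_floor:
            break  # treat the remaining tail as impossible
        total += p * utility(2.0 ** k)
    return total

# Risk-neutral in money: every term contributes 2**-k * 2**k = 1, so the
# partial sums grow without bound (here, 200, only because we truncate).
print(st_petersburg_value(lambda x: x))

# Diminishing returns: with log utility each term is k*ln(2)/2**k, the
# series converges, and the game is worth a finite 2*ln(2) ~ 1.39 utils.
print(st_petersburg_value(math.log))

# Rounding probabilities below 1e-6 to 0 caps even the risk-neutral
# value at a modest 19.
print(st_petersburg_value(lambda x: x, prob_floor=1e-6))
```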
I don't think these reveal problems with using EPs specifically. It seems like the same problems could occur if you talked in qualitative terms about probabilities (e.g., 'at least possible', 'fairly good odds'), and in either case the 'fix' might look the same (e.g., rounding down either a quantitative or qualitative probability to 0 or to impossibility).
But it does seem that, in practice, people not using EPs are more likely to round down low probabilities to 0. This could be seen as good, for avoiding Pascal's mugging, and/or as bad, for a whole host of other reasons (e.g., ignoring many x-risks).
Maybe a fuller version of this post would include edge cases like that, but I know less about them, and I think they could create 'issues' (arguably) even when one isn't using explicit probabilities anyway.
I mostly agree with you. I removed the reference to martingales from my previous comment because (a) it's not my area of expertise, and (b) this discussion doesn't need the additional complexity.
I'm sorry for having raised issues about paradoxes (perhaps there should be a Godwin's Law about them); I don't think we should mix edge cases like St. Petersburg (and problems with unbounded utility in general) with the optimizer's curse, since it's already hard to analyze them separately.
when talking about utility itself, and thus having accounted for diminishing returns and all that, one should be risk-neutral.
Pace Buchak, I agree with that, but I wouldn't say it aloud without adding caveats: in the real world, our problems are often ones of dynamic choice (so one may have to think about optimal stopping, strategies, information gathering, etc.), we don't observe utility functions, we have limited cognitive resources, and we are evaluated by others and have to cooperate with them. So I guess some 'pure' risk-aversion might be a workable satisficing heuristic to [signal you] try to avoid the worst outcomes when you can't account for all that. But that's not talking about utility itself, and certainly not about probability/uncertainty itself.
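As an aside, since Buchak came up: on my (possibly lossy) reading of Buchak (2013), her risk-weighted expected utility weights each increment of utility by a risk function applied to the probability of doing at least that well, rather than by the raw probability. A minimal sketch, with the quadratic risk function as just one conventional risk-avoidant example:

```python
# Sketch of risk-weighted expected utility (REU), on my reading of
# Buchak (2013): weight each utility increment by r(P(doing at least
# that well)) instead of by the raw probability.
def reu(outcomes, probs, utility=lambda x: x, risk=lambda p: p):
    """outcomes: payoffs; probs: matching probabilities (summing to 1)."""
    pairs = sorted(zip(outcomes, probs))   # ascending by payoff
    utils = [utility(x) for x, _ in pairs]
    value = utils[0]  # the worst outcome is the guaranteed baseline
    tail = 1.0        # running P(outcome >= the next-best payoff)
    for i in range(1, len(utils)):
        tail -= pairs[i - 1][1]
        value += risk(tail) * (utils[i] - utils[i - 1])
    return value

gamble = ([0, 100], [0.5, 0.5])             # a 50/50 shot at 100
print(reu(*gamble))                         # r(p) = p recovers plain EU: 50.0
print(reu(*gamble, risk=lambda p: p ** 2))  # convex r is risk-avoidant: 25.0
```

With r(p) = p this is exactly risk-neutrality in utility; the disagreement is over whether a non-linear r can still be rational once diminishing returns are already baked into u.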
I removed the reference to martingales from my previous comment because (a) it's not my area of expertise, and (b) this discussion doesn't need the additional complexity.
I'm sorry for having raised issues about paradoxes (perhaps there should be a Godwin's Law about them); I don't think we should mix edge cases like St. Petersburg (and problems with unbounded utility in general) with the optimizer's curse, since it's already hard to analyze them separately.
In line with the spirit of your comment, I think it's useful to recognise that not every discussion of the pros and cons of probabilities, or of how to use them, can or should address all potential issues. And I think it's good to recognise/acknowledge when a certain issue or edge case applies more broadly than just to the particular matter at hand (e.g., how St Petersburg is relevant even aside from the optimizer's curse). An example of roughly the sort of reasoning I mean in that second sentence, from Tarsney writing on moral uncertainty:
The third worry suggests a broader objection, that [the] content-based normalization approach in general is vulnerable to fanaticism. Suppose we conclude that a pluralistic hybrid of Kantianism and contractarianism would give lexical priority to Kantianism, and on this basis conclude that an agent who has positive credence in Kantianism, contractarianism, and this pluralistic hybrid ought to give lexical priority to Kantianism as well. [...]
I am willing to bite the bullet on this objection, up to a point: Some value claims may simply be more intrinsically weighty than others, and in some cases absolutely so. In cases where the agent's credence in the lexically prioritized value claim approaches zero, however, the situation begins to resemble Pascal's Wager (Pascal, 1669), the St. Petersburg Lottery (Bernoulli, 1738), and similar cases of extreme probabilities and magnitudes that bedevil decision theory in the context of merely empirical uncertainty. It is reasonable to hope, then, that the correct decision-theoretic solution to these problems (e.g. a dismissal of 'rationally negligible probabilities' (Smith, 2014, 2016) or general rational permission for non-neutral risk attitudes (Buchak, 2013)) will blunt the force of the fanaticism objection.
But I certainly don't think you need to apologise for raising those issues! They are relevant and very worthy of discussion; I just don't know if they're in the top 7 issues I'd discuss in this particular post, given its intended aims and my current knowledge base.
Oh, I only apologised because, well, if we start discussing catchy paradoxes, we'll soon lose track of our original point.
But if you enjoy it, and since it is a relevant subject: I think people use three broad 'strategies' to tackle St. Petersburg-style paradoxes:
[epistemic status: low, but it kind of makes sense]
a) 'economist': 'if you use a bounded version, or take time into account, the paradox disappears: just apply a logarithmic function for diminishing returns...'
b) 'philosopher': 'unbounded utility is weird', or 'beware: it's Pascal's Wager with objective probabilities!'
c) 'statistician': 'the problem is this probability distribution: you can't apply the central limit theorem (or other limit theorems, or the indifference principle, etc.) to calculate its expectation'
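For what it's worth, the statistician's complaint is easy to see numerically: the St Petersburg payoff has infinite expectation, so the law of large numbers gives no guarantee and the running sample mean never settles. A quick simulation (the seed and sample sizes are arbitrary choices of mine):

```python
# Sketch of the 'statistician' objection: with infinite expectation, the
# law of large numbers doesn't apply, and sample means drift upward
# (roughly like log2(n)) instead of converging.
import random

def play_once(rng):
    """Flip until heads; the payoff doubles with every tail."""
    payoff = 2
    while rng.random() < 0.5:  # tails: keep flipping
        payoff *= 2
    return payoff

rng = random.Random(0)
for n in (10**2, 10**4, 10**6):
    mean = sum(play_once(rng) for _ in range(n)) / n
    print(n, mean)  # no stabilisation as n grows
```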