I mostly agree with you. I removed the reference to martingales from my previous comment because: a) it's not my expertise, and b) this discussion doesn't need additional complexity.
I’m sorry for having raised issues about paradoxes (perhaps there should be a Godwin’s Law about them); I don’t think we should mix edge cases like St. Petersburg (and problems with unbounded utility in general) with the optimizer’s curse – it’s already hard to analyze them separately.
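(As an aside, for readers who haven't seen the optimizer's curse demonstrated: here's a minimal simulation sketch. The setup and numbers are mine and purely illustrative.)

```python
# A minimal sketch of the optimizer's curse: pick the option with the best
# *noisy* estimate and, on average, you'll be disappointed.
import random
import statistics

random.seed(0)

def curse_trial(n_options=20, noise_sd=1.0):
    # Every option has true value 0; we only see true value + Gaussian noise.
    estimates = [random.gauss(0.0, noise_sd) for _ in range(n_options)]
    # The estimate of the option we'd pick -- its true value is still 0.
    return max(estimates)

trials = [curse_trial() for _ in range(10_000)]
# The chosen option looks worth ~+1.9 on average, but is actually worth 0:
print(statistics.mean(trials))
```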
when talking about utility itself, and thus having accounted for diminishing returns and all that, one should be risk-neutral.
Pace Buchak, I agree with that, but I wouldn't say it aloud without adding caveats: in the real world, our problems are often ones of dynamic choice (so one may have to think about optimal stopping, strategies, information gathering, etc.), we don't observe utility functions, we have limited cognitive resources, and we are evaluated by and have to cooperate with others. So I guess some "pure" risk aversion might be a workable satisficing heuristic to [signal you] try to avoid the worst outcomes when you can't account for all that. But that's not talking about utility itself, and certainly not about probability / uncertainty itself.
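(To make the "risk-neutral over utility" point concrete, here's a minimal sketch of mine, assuming log utility and arbitrary numbers: maximizing expected utility is risk-neutral in utils, yet still risk-averse in money terms once diminishing returns are baked into u.)

```python
# With a concave utility function, an expected-utility maximizer is
# "risk-neutral in utils" but risk-averse over money.
import math

def u(wealth):
    # Assumed log utility: diminishing returns already built in.
    return math.log(wealth)

# A 50/50 gamble: end up with $50 or $150.
outcomes, probs = [50.0, 150.0], [0.5, 0.5]

expected_money = sum(p * x for p, x in zip(probs, outcomes))        # 100.0
expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))   # ~4.46

# Certainty equivalent: the sure amount with the same expected utility.
certainty_equivalent = math.exp(expected_utility)                   # ~86.6

print(expected_money, certainty_equivalent)
# The agent is indifferent between the gamble and ~$86.6 for sure: being
# neutral over utility implies aversion over money once u is concave.
```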
In line with the spirit of your comment, I think it's useful to recognise that not every discussion of the pros and cons of probabilities, or of how to use them, can or should address every potential issue. And it's good to acknowledge when a certain issue or edge case applies more broadly than just the particular matter at hand (e.g., how St. Petersburg is relevant even aside from the optimizer's curse). As an example of roughly the sort of reasoning I mean with that second sentence, here is Tarsney writing on moral uncertainty:
The third worry suggests a broader objection, that the content-based normalization approach in general is vulnerable to fanaticism. Suppose we conclude that a pluralistic hybrid of Kantianism and contractarianism would give lexical priority to Kantianism, and on this basis conclude that an agent who has positive credence in Kantianism, contractarianism, and this pluralistic hybrid ought to give lexical priority to Kantianism as well. [...]
I am willing to bite the bullet on this objection, up to a point: Some value claims may simply be more intrinsically weighty than others, and in some cases absolutely so. In cases where the agent’s credence in the lexically prioritized value claim approaches zero, however, the situation begins to resemble Pascal’s Wager (Pascal, 1669), the St. Petersburg Lottery (Bernoulli, 1738), and similar cases of extreme probabilities and magnitudes that bedevil decision theory in the context of merely empirical uncertainty. It is reasonable to hope, then, that the correct decision-theoretic solution to these problems (e.g. a dismissal of “rationally negligible probabilities” (Smith, 2014, 2016) or general rational permission for non-neutral risk attitudes (Buchak, 2013)) will blunt the force of the fanaticism objection.
But I certainly don’t think you need to apologise for raising those issues! They are relevant and very worthy of discussion—I just don’t know if they’re in the top 7 issues I’d discuss in this particular post, given its intended aims and my current knowledge base.
Oh, I only apologised because, well, if we start discussing catchy paradoxes, we'll soon lose track of our original point.
But if you enjoy it, and since it is a relevant subject, I think people use three broad "strategies" to tackle St. Petersburg paradoxes and the like:
[epistemic status: low, but it kind of makes sense]
a) "economist": "if you use a bounded version, or take time into account, the paradox disappears: just apply a logarithmic function for diminishing returns..." (see the sketch after this list)
b) “philosopher”: “unbounded utility is weird” or “beware, it’s Pascal’s Wager with objective probabilities!”
c) "statistician": "the problem is this probability distribution: you can't apply the central limit theorem (or other limit theorems, or the indifference principle, etc.) and calculate its expectation"
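(Here's the sketch promised under (a), a minimal simulation of mine assuming the standard St. Petersburg rules, i.e. a payoff of 2^k if the first head appears on toss k:)

```python
# The raw St. Petersburg payoff has infinite expectation, but the
# "economist" fix -- log utility for diminishing returns -- makes it finite.
import math
import random

random.seed(0)

def st_petersburg_payoff():
    # Toss a fair coin until the first head; the payoff doubles each tail.
    k = 1
    while random.random() < 0.5:
        k += 1
    return 2.0 ** k

n = 100_000
payoffs = [st_petersburg_payoff() for _ in range(n)]

# The raw sample mean is dominated by rare huge payoffs and keeps growing
# with n (the true expectation is infinite)...
print(sum(payoffs) / n)
# ...but expected *log* utility is finite: E[log2(payoff)] = sum(k / 2**k) = 2.
print(sum(math.log2(x) for x in payoffs) / n)  # converges to ~2.0
```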