One way you could think about the St Petersburg lottery money pump is that the future version of yourself, after evaluating the lottery, just has different preferences or is a different agent. Now, you might say your preferences should be consistent over time and after evaluations, but why? I think the main reason is to avoid picking dominated outcome distributions, but there could be other ways to do that in practice, e.g. pre-commitments, resolute choice, burning bridges, trades, etc. You would want to do the same thing for Parfit’s hitchhiker. And you would similarly want to constrain the choices of, or make trades with, other agents with different preferences, if you were handing off the decision-making to them.
> I grant that this is pretty weird. But I think it’s weird because of the mathematical property that an infinite lottery can have, where its average value (or its expected value) can be greater than any possible value it might have. In light of such a situation, it’s not particularly surprising that each time you discover the outcome of the situation, you’ll be disappointed and want to trade it away. If a view has weird implications because of weird math, that is the fault of the math, not of the view.
I’m not sure I would only blame the math, or that you should really separate the math from the view.
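To make the quoted property concrete, here is a minimal sketch in Python, assuming the standard St Petersburg setup where the payoff is 2^n for a first heads on flip n: every truncated expected value grows without bound, while every realized outcome is some finite power of two, so each draw falls short of the (infinite) expectation.

```python
import random

rng = random.Random(0)  # fixed seed, just for reproducibility of the sketch

def st_petersburg_draw():
    """One play: flip a fair coin until the first heads; payoff is 2**n."""
    n = 1
    while rng.random() < 0.5:  # tails: keep flipping
        n += 1
    return 2 ** n

def truncated_ev(k):
    """Expected value restricted to the first k flips.

    Each term is (1/2)**n * 2**n = 1, so the sum is exactly k:
    the expectation exceeds any fixed bound as k grows.
    """
    return sum((0.5 ** n) * (2 ** n) for n in range(1, k + 1))

print(truncated_ev(10))    # 10.0
print(truncated_ev(1000))  # 1000.0 -- diverges with k

# Yet every realized payoff is a finite number, strictly below the
# (infinite) expected value, so the agent is always "disappointed".
draws = [st_petersburg_draw() for _ in range(5)]
print(draws)
```

So the "weird math" the quote points to is just that the expectation is the limit of these truncations, and no single draw can ever match it.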
Basically all of the arguments for the finitary independence axiom and finitary sure-thing principle are also arguments for their infinitary versions, and then they imply “bounded” utility functions.[1] You could make exceptions for unbounded prospects and infinities because infinities are weird, but you should also probably accept that you’re at least somewhat undermining some of your arguments for fanaticism in the first place, because they won’t hold in full generality.
Indeed, I would say fanaticism is less instrumentally rational than bounded utility functions, i.e. more prone to making dominated choices. But there can be genuine tradeoffs between instrumental rationality and other desiderata. I don’t see why sometimes making dominated choices in theory is worse than sacrificing other desiderata. Either way, you’re losing something.
In my case, I’m willing to sacrifice some instrumental rationality to avoid fanaticism, so I’m sympathetic to some difference-making views.
That assumes independence of irrelevant alternatives, transitivity and completeness, but I’d think you can drop completeness and get a similar result, with “multi-utility functions”.
[1] See Jeffrey Sanford Russell and Yoaav Isaacs, “Infinite Prospects,” Philosophy and Phenomenological Research, vol. 103, no. 1, Wiley, July 2020, pp. 178–98, https://doi.org/10.1111/phpr.12704, https://philarchive.org/rec/RUSINP-2