The other unlisted option (here) is that we just accept that infinities are weird and can generate counter-intuitive results, and that we shouldn't take too much from them, because it is easier to blame them than all of the other things wrapped up with them. I think the ordering on integers is weird, but it's not a metaphysical problem. The weird fact is that every integer is unusually small. But that's just a fact, not a problem to solve.
Infinities generate paradoxes. There are plenty of examples. In decision theory, there is also stuff like Satan’s apple and the expanding sphere of suffering / pleasure. Blaming them all on the weirdness of infinities just seems tidier than coming up with separate ad hoc resolutions.
I think there's something to this. I argue in Sacrifice or weaken utilitarian principles that it's better to satisfy the principles you find intuitive to a greater degree rather than a lesser one (i.e. satisfy weaker versions, which could include the finitary or deterministic-case versions, or approximate versions). So, it's kind of a matter of degree. Still, I think we should have some nuance about infinities rather than treat them all the same and paint their consequences as all easily dismissible. (I gather that this is compatible with your responses so far.)
In general, I take actual infinities (infinities in outcomes, or infinitely many decisions or options) to be more problematic for basically everyone (although perhaps with additional problems for those with impartial aggregative views), and so their problems are easier to dismiss and blame on infinities. Problems from probability distributions with infinitely many outcomes seem to apply much more narrowly, and so are harder to dismiss or blame on infinities.
(The rest of this comment goes through examples.)
And I don't think the resolutions are in general ad hoc. Arguments for the Sure-Thing Principle are arguments for bounded utility (well, something more general), and we can characterize the ways of avoiding the problem as such (given other EUT axioms, e.g. Russell and Isaacs, 2021). Dutch book arguments for probabilism are arguments that your credences should satisfy certain properties not satisfied by improper distributions. And improper distributions are poorly behaved in other ways that make them implausible for use as credences. For example, how do you define expectations, medians and other quantiles over them — or even the expected value of a nonzero constant function or a two-valued step function — in a way that makes sense? Improper distributions just do very little of what credences are supposed to do.
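To make the improper-distribution point concrete, here's a toy sketch in Python (my own illustration, not from the cited papers). Treat the "uniform distribution over all integers" as a limit of proper uniform distributions on finite ranges: the expected value of even a two-valued step function then depends on the arbitrary truncation scheme, so the improper distribution assigns it no expectation at all.

```python
# Toy illustration: the "uniform distribution over all integers" gives no
# well-defined expectation even for a two-valued step function, because the
# answer depends on how you truncate before taking the limit.

def step(x):
    """Two-valued step function: 1 on the non-negative integers, 0 elsewhere."""
    return 1.0 if x >= 0 else 0.0

def expectation_under_truncation(lo, hi):
    """E[step] under a proper uniform distribution on {lo, ..., hi}."""
    support = range(lo, hi + 1)
    return sum(step(x) for x in support) / len(support)

for n in (10, 1000, 100000):
    symmetric = expectation_under_truncation(-n, n)      # -> 1/2 as n grows
    lopsided = expectation_under_truncation(-n, 3 * n)   # -> 3/4 as n grows
    print(f"n={n}: symmetric {symmetric:.4f}, lopsided {lopsided:.4f}")

# Both schemes exhaust the integers in the limit, but one gives 1/2 and the
# other 3/4, so "the" expectation under the improper uniform is undefined.
```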
There are also representation theorems in infinite ethics, specifically giving discounting and limit functions under some conditions in Asheim, 2010 (discussed in West, 2015), and average utilitarianism under others in Pivato (2021, and further discussed in 2022 and 2023).
Satan's apple would be a problem for basically everyone, and it results from an actual infinity, i.e. infinitely many actual decisions made. (I think how you should handle it in practice is to precommit to taking at most a specific finite number of pieces of the apple, or to use a probability distribution over the number of pieces, possibly one with infinite expected value but that is finite with certainty.)
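For that parenthetical, here's a quick sketch of such a distribution (a St. Petersburg-style stopping rule; the specific numbers are just for illustration):

```python
import random

# Stopping rule that is finite with certainty but has infinite expected value:
# flip a fair coin until heads; if it took k flips, take N = 2**k pieces.
# P(N = 2**k) = 2**-k, so E[N] = sum of (2**-k * 2**k) = 1 + 1 + ... = infinity,
# yet every sampled N is a finite number (the coin lands heads eventually,
# with probability 1).

def sample_pieces():
    k = 1
    while random.random() < 0.5:  # tails: keep flipping
        k += 1
    return 2 ** k

print([sample_pieces() for _ in range(10)])  # always finite values
```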
Similarly, when you have infinitely many options to choose from, there may not be any undominated option. If you respect statewise dominance and have two outcomes A and B, with one strictly worse than the other, then there's no undominated option among the lotteries pA + (1-p)B with p = 1/n (or 1 - 1/n) for each positive integer n. These are cases where the argument for dismissal is strong, because "solving" these problems would mean giving up the most basic requirements of our theories. (And this fits well with scalar utilitarianism.)
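Spelled out with made-up numbers (and simplifying to expected utility rather than statewise dominance), the problem is that the supremum is never attained:

```python
# Toy example with hypothetical utilities u(A) = 0 < u(B) = 1: the lottery
# p*A + (1-p)*B has expected utility 1 - p, so options with smaller p are
# strictly better, and every option is beaten by the next one in the sequence
# p = 1, 1/2, 1/3, ... The supremum (utility 1) would need p = 0, which is
# not among the options, so no option is undominated.

u_A, u_B = 0.0, 1.0

def expected_utility(p):
    return p * u_A + (1 - p) * u_B

ps = [1 / n for n in range(1, 8)]  # p = 1, 1/2, 1/3, ..., 1/7
for p_worse, p_better in zip(ps, ps[1:]):
    assert expected_utility(p_better) > expected_utility(p_worse)
    print(f"p={p_better:.3f} beats p={p_worse:.3f}")
```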
My inclination for the expanding sphere of suffering/pleasure is that there are principled solutions:
If you can argue for the separateness of persons, then you should sum over each person's life before summing across lives. Or, if people's utility values are in fact utility functions, i.e. just preferences about how things go, then there may be nothing to aggregate within the person. There's no temporal aggregation over each person in Harsanyi's theorem.
If we have to pick an order to sum in or take a value density over, there are more or less natural ones, e.g. using a sequence of nested compact convex sets whose union is the whole space. If we can't pick one, we can pick multiple orderings or all of them, either allowing incompleteness with a multi-utility representation (Shapley and Baucells, 1998, Dubra, Maccheroni, and Ok, 2004, McCarthy et al., 2017, McCarthy et al., 2021), or having normative uncertainty between them. (The toy example below shows why the choice of ordering matters in the first place.)
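Here's a toy 1D version of the order-dependence (my own illustration, using the alternating harmonic series rather than an actual expanding sphere): the same infinite population of welfare values sums to different totals under different orderings, which is exactly why a privileged choice of expansion, or something like a multi-utility representation, is needed.

```python
import math

# Infinitely many people, person k with welfare (-1)**(k+1) / k. Summing in
# the natural order gives ln(2); taking two positive-welfare people for each
# negative-welfare one gives (3/2) * ln(2). Same people, same welfares,
# different "total", depending only on the order of aggregation.

def welfare(k):
    return (-1) ** (k + 1) / k

N = 10 ** 5

natural = sum(welfare(k) for k in range(1, N + 1))

pos = (welfare(k) for k in range(1, 2 * N, 2))  # odd k: positive welfares
neg = (welfare(k) for k in range(2, 2 * N, 2))  # even k: negative welfares
rearranged = sum(next(pos) + next(pos) + next(neg) for _ in range(N // 3))

print(f"natural    ~ {natural:.4f}  (ln 2       = {math.log(2):.4f})")
print(f"rearranged ~ {rearranged:.4f}  (3/2 * ln 2 = {1.5 * math.log(2):.4f})")
```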