I don’t think any of the axioms are self-evident. FWIW, I don’t really think anything is self-evident, maybe other than direct logical deductions and applications of definitions.
I have some sympathy for rejecting each of them, except maybe transitivity, which I’m pretty strongly inclined not to give up. (EDIT: On the other hand, I’m quite willing to give up the Independence of Irrelevant Alternatives, which is similar to transitivity.) I give weight to views that violate the axioms, under normative uncertainty.
Some ways you might reject them:
Continuity: Continuity rules out infinities and prospects with finite value but infinite expected value, like St Petersburg lotteries. If Continuity is meant to apply to all logically coherent prospects (including prospects with infinitely many possible outcomes), then this implies your utility function must be bounded. This rules out expectational total utilitarianism as a general view.
Continuity: You might think some harms are infinitely worse than others, e.g. when suffering reaches the threshold of unbearability. It could also be that this threshold is imprecise/vague/fuzzy, and we would also reject Completeness to accommodate that.
Completeness: Some types of values/goods/bads may be incomparable. Or, you might think interpersonal welfare comparisons, e.g. across very different kinds of minds, are not always possible. Tradeoffs between incomparable values would often be indeterminate. Or, you might think they are comparable in principle, but only vaguely so, leaving gaps of incomparability when the tradeoffs seem too close.
Independence: Different accounts of risk aversion or difference-making risk aversion (not just decreasing marginal utility, which is consistent with Independence).
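(For reference, and not part of the original comments: one common formulation of the Continuity axiom at issue, in the form the proof further down relies on, is that for any lotteries $A \prec B \prec C$ there is some probability $p \in [0,1]$ such that
$$pA + (1-p)C \sim B,$$
i.e. $B$ is exactly as good as some probabilistic mixture of a strictly worse and a strictly better lottery.)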
Continuity doesn’t imply your utility function is bounded, just that it never takes on the value “infinity”, i.e. for any value it takes on, there are higher and lower values that can be averaged (probabilistically mixed) to reach that value.
If your utility function can take arbitrarily large but finite values, then you can design a prospect/lottery with infinitely many possible outcomes and infinite expected value, like the St Petersburg paradox. Then you can treat such a prospect/lottery as if it has infinite actual value, and demonstrate violations of Continuity the same way you would with an outcome with infinite value. This is assuming Continuity applies to arbitrary prospects/lotteries, including with infinitely many possible outcomes, not just finitely many possible outcomes per prospect/lottery.
(Infinitary versions of Independence and the Sure-Thing Principle also rule out “unbounded” utility functions. See Russell & Isaacs, 2020.)
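As a purely illustrative numerical sketch of this construction (my own addition, not from the original comments; it assumes the specific choice $u(x_n) = 2^n$ with probability $1/2^n$, as in the proof further down), the expected utility of the lottery truncated to its first $N$ outcomes is already $N$, so the full lottery’s expected utility diverges even though every individual outcome has finite utility:

```python
# Illustrative sketch only: assumes u(x_n) = 2**n and P(x_n) = 1/2**n, as in
# the St Petersburg-style construction described above. Every outcome has
# finite utility, but the partial expected utilities grow without bound.

def truncated_expected_utility(num_outcomes: int) -> float:
    """Expected utility of the lottery restricted to its first N outcomes."""
    return sum((1 / 2**n) * 2**n for n in range(1, num_outcomes + 1))

for n in (10, 100, 1000):
    print(n, truncated_expected_utility(n))  # 10 -> 10.0, 100 -> 100.0, 1000 -> 1000.0
```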
1. Continuity: Continuity rules out infinities and prospects with finite value but infinite expected value, like St Petersburg lotteries. If continuity is meant to apply to all logically coherent prospects (as usually assumed), then this implies your utility function must be bounded. This rules out expectational total utilitarianism as a general view.
2. Continuity: You might think some harms are infinitely worse than others, e.g. when suffering reaches the threshold of unbearability. It could also be that this threshold is imprecise/vague/fuzzy, and we would also reject completeness to accommodate that.
In practice, I think the effects of one’s actions decay to practically 0 after 100 years or so. In principle, I am open to one’s actions having effects which are arbitrarily large, but not infinite, and continuity does not rule out arbitrarily large effects.
3. Completeness: Some types of values/goods/bads may be incomparable. Or, you might think interpersonal welfare comparisons, e.g. across very different kinds of minds, are not always possible. Tradeoffs between incomparable values would often be indeterminate. Or, you might think they are comparable in principle, but only vaguely so, leaving gaps of incomparability when the tradeoffs seem too close.
Reality forces us to compare outcomes, at least implicitly.
4. Independence: Different accounts of risk aversion or difference-making risk aversion (not just decreasing marginal utility, which is consistent with Independence).
I just do not see how adding the same possibility to each of 2 lotteries can change my assessment of these.
In practice, I think the effects of one’s actions decay to practically 0 after 100 years or so. In principle, I am open to one’s actions having effects which are arbitrarily large, but not infinite, and continuity does not rule out arbitrarily large effects.
If you allow arbitrarily large values and prospects with infinitely many different possible outcomes, then you can construct St Petersburg-like prospects, which have infinite expected value but only take finite value in every outcome. These violate Continuity (if it’s meant to apply to all prospects, including ones with infinitely many possible outcomes). So from arbitrarily large values, we violate Continuity.
We’ve also discussed this a bit before, and I don’t expect to change your mind now, but I think actually infinite effects are quite plausible (mostly through acausal influence in a possibly spatially infinite universe), and I think it’s unwarranted to assign them probability 0.
Reality forces us to compare outcomes, at least implicitly.
There are decision rules that are consistent with violations of Completeness (see the sketch below for one simple example). I’m guessing you want to treat incomparable prospects/lotteries as equivalent, or that whenever you pick one prospect over another, the one you pick is at least as good as the other, but this would force other constraints on how you compare prospects/lotteries that these decision rules for incomplete preferences don’t.
I just do not see how adding the same possibility to each of 2 lotteries can change my assessment of these.
You could read more about the relevant accounts of risk aversion and difference-making risk aversion, e.g. discussed here and here. Their motivations would explain why and how Independence is violated. To be clear, I’m not personally sold on them.
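Purely to illustrate the decision rules for incomplete preferences mentioned above (my own sketch; the rules discussed in the linked post may differ), one simple such rule is to treat an option as permissible whenever no other option is strictly better than it (often called maximality):

```python
# Illustrative sketch only: a "maximality"-style rule for incomplete preferences.
# strictly_better is a hypothetical, possibly incomplete strict preference
# relation; mutually incomparable options can all come out permissible, so the
# rule never forces a comparison that the preferences themselves don't make.

from typing import Callable, List, TypeVar

T = TypeVar("T")

def permissible_options(
    options: List[T], strictly_better: Callable[[T, T], bool]
) -> List[T]:
    """Return every option that no other option is strictly better than."""
    return [a for a in options if not any(strictly_better(b, a) for b in options)]
```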
If you allow arbitrarily large values and prospects with infinitely many different possible outcomes, then you can construct St Petersburg-like prospects, which have infinite expected value but only take finite value in every outcome. These violate Continuity (if it’s meant to apply to all prospects, including ones with infinitely many possible outcomes). So from arbitrarily large values, we violate Continuity.
Sorry for the lack of clarity. In principle, I am open to lotteries with arbitrarily large expected utility, but not infinite, and continuity does not rule out arbitrarily large expected utilities. I am open to lotteries with arbitrarily many outcomes (in principle), but not to lotteries with infinitely many outcomes (not even in principle).
We’ve also discussed this a bit before, and I don’t expect to change your mind now, but I think actually infinite effects are quite plausible (mostly through acausal influence in a possibly spatially infinite universe), and I think it’s unwarranted to assign them probability 0.
I think empirical evidence can take us from a very large universe to an arbitrarily large universe (for arbitrarily strong evidence), but never to an infinite universe. An arbitrarily large universe would still be infinitely smaller than an infinite universe, so I would say the former provides no empirical evidence for the latter. So I am confused about why discussions about infinite ethics often mention there is empirical evidence pointing to the existence of infinity[1]. Assigning a probability of 0 to something for which there is no empirical evidence at all makes sense to me.
There are decision rules that are consistent with violations of Completeness. I’m guessing you want to treat incomparable prospects/lotteries as equivalent or that whenever you pick one prospect over another, the one you pick is at least as good as the latter, but this would force other constraints on how you compare prospects/lotteries that these decision rules for incomplete preferences don’t.
I have not looked into the post you linked, but you guessed correctly. Which constraints would be forced as a result? I do not think preferential gaps make sense in principle.
You could read more about the relevant accounts of risk aversion and difference-making risk aversion, e.g. discussed here and here. Their motivations would explain why and how Independence is violated. To be clear, I’m not personally sold on them.
Thanks for the links. Plato’s section “The Challenge from Risk Aversion” argues for risk aversion based on observed risk aversion with respect to resources like cups of tea and money. I guess the same applies to Rethink Priorities’ section. I am very much on board with risk aversion with respect to resources, but I still think it makes sense to be risk neutral with respect to total hedonistic welfare.
Yes, continuity doesn’t rule out St Petersburg paradoxes. But I don’t see how unbounded utility leads to a contradiction. Can you demonstrate it?
Assume your utility function $u$ is unbounded from above. Pick outcomes $x_1, x_2, \ldots$ such that $u(x_n) \ge 2^n$. Let your lottery $X$ be $x_n$ with probability $1/2^n$. Note that $\sum_{n=1}^{\infty} 1/2^n = 1$, so the probabilities sum to 1.
Then this lottery has infinite expected utility:
$$E[u(X)] = \sum_{n=1}^{\infty} \frac{1}{2^n} u(x_n) \ge \sum_{n=1}^{\infty} \frac{1}{2^n} 2^n = \sum_{n=1}^{\infty} 1 = \infty.$$
Now, consider any two other lotteries $A$ and $B$ with finite expected utility, such that $A \prec B \prec X$. There’s no way to mix $A$ and $X$ probabilistically to be equivalent to $B$, because
$$E[u(pA + (1-p)X)] = p\,E[u(A)] + (1-p)\,E[u(X)] = p\,E[u(A)] + \infty = \infty > E[u(B)]$$
whenever $p < 1$. For $p = 1$, $E[u(pA + (1-p)X)] = E[u(A)] < E[u(B)]$.
So Continuity is violated.
Thanks, Michael! Nitpick, E((X)) in the 3rd line from the bottom should be E(u(X)).
Thanks, fixed!
Got it, yes I agree now.
[1] From Bostrom (2011): “Recent cosmological evidence suggests that the world is probably infinite”.