I find that this approach undermines one of the major intuitions behind utilitarianism in the first place: what is permissible, obligatory, etc. should not depend on parts of the universe that are independent of (unaffected by) my actions; this is (a stochastic version of) separability. It is no longer the case that what’s best depends only on the ex ante prospects each individual faces, which is essentially one of the assumptions in Harsanyi’s argument for utilitarianism (Postulate c in the paper, assumption 3 here) and in this generalization (Anteriority), because now the statistical dependence between individuals’ prospects matters. You could assume separability (independence of unconcerned agents) in uncertainty-free cases and still arrive at utilitarianism, but you’ve still undermined the intuition. Why use an additive theory at all now?
Could you elaborate why this violates Pareto? I’m used to that assumption being phrased in terms of sure things, but even if you make it stochastic it still seems fine to say “if A stochastically dominates B for each person, then A > B”.
And for what it’s worth, this is not one of my major intuitions behind utilitarianism. Cluelessness already implies that I need to consider a butterfly flapping its wings before deciding whether to donate to AMF; stating that the butterfly could be outside my light cone doesn’t seem qualitatively different.
(Possibly it is a key intuition that Harsanyi had, not sure. Also I do agree that considering consequences unaffected by my actions is a counterintuitive thing for any decision theory to do, moral or otherwise.)
You can’t get them to give opposite strict inequalities, i.e. A&lt;B according to Pareto and A&gt;B according to stochastic dominance, since a Pareto improvement implies higher expected total utility, which implies the option is not stochastically dominated. But you can get a Pareto improvement that doesn’t stochastically dominate (the two options are incomparable under stochastic dominance). “Gamble A first-order stochastically dominates gamble B if and only if every expected utility maximizer with an increasing utility function prefers gamble A over gamble B,” which means that stochastic dominance with total utility is compatible with (but weaker than) the order implied by the expected value of any increasing function of total utility, including ones with very different risk preferences over total utility. So, you could apply f where f(x) = x, x^3, x^(1/3), arctan(x), e^x, −e^(−x), etc.
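To illustrate the compatibility claim, here’s a quick Python sketch (the two toy gambles are my own, not from the discussion) checking that a first-order stochastically dominant gamble gets at least as high an expected value under each of those increasing functions:

```python
import math

def ev(dist, f=lambda x: x):
    # Expected value of f(X) for a finite distribution {value: probability}
    return sum(f(v) * p for v, p in dist.items())

# a takes values 1 or 2, b takes values 0 or 2, each with probability 0.5;
# a first-order stochastically dominates b.
a = {1: 0.5, 2: 0.5}
b = {0: 0.5, 2: 0.5}

increasing_fs = [
    lambda x: x,
    lambda x: x**3,
    lambda x: x**(1 / 3),
    math.atan,
    math.exp,
    lambda x: -math.exp(-x),
]

# Every increasing f agrees with the dominance ordering: E[f(a)] >= E[f(b)]
assert all(ev(a, f) >= ev(b, f) for f in increasing_fs)
```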
Let I be a random variable that’s 0 or 1, with probability 0.5 each. Consider two options with the following utility prospects for a single person:
1. I
2. 0.5(1−I)
1 is better, with expected value 0.5, while 2 has expected value 0.25. 1 also stochastically dominates 2. Pareto and stochastic dominance agree here.
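A minimal Python check of this single-person case (a sketch, representing each prospect as a finite {value: probability} dict):

```python
def survival(dist, t):
    # P(X >= t) for a finite distribution {value: probability}
    return sum(p for v, p in dist.items() if v >= t)

def fosd(a, b):
    # a first-order stochastically dominates b iff
    # P(a >= t) >= P(b >= t) for every t, strictly for some t.
    # For finite distributions it suffices to check the support points.
    ts = set(a) | set(b)
    return (all(survival(a, t) >= survival(b, t) for t in ts)
            and any(survival(a, t) > survival(b, t) for t in ts))

def ev(dist):
    return sum(v * p for v, p in dist.items())

opt1 = {0: 0.5, 1: 0.5}      # prospect I
opt2 = {0: 0.5, 0.5: 0.5}    # prospect 0.5 * (1 - I)

assert ev(opt1) == 0.5 and ev(opt2) == 0.25
assert fosd(opt1, opt2)      # option 1 stochastically dominates option 2
```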
Suppose there’s another individual, with prospect I in both 1 and 2. Summing the utilities, we get
1. I + I = 2I
2. 0.5(1−I) + I = 0.5(1+I)
But neither stochastically dominates the other: 2 has a 100% probability of being at least 0.5, while 1 only has a 50% probability. Pareto would rule out 2, but stochastic dominance does not; both are permissible. So, this violates your definition of Pareto, although it’s compatible with a weak Pareto definition.
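The same kind of check for the two-person totals (a sketch; `fosd` and `survival` are the standard first-order dominance tests for finite distributions, built on the fact that both individuals’ prospects depend on the same I):

```python
def survival(dist, t):
    # P(X >= t) for a finite distribution {value: probability}
    return sum(p for v, p in dist.items() if v >= t)

def fosd(a, b):
    # First-order stochastic dominance for finite distributions
    ts = set(a) | set(b)
    return (all(survival(a, t) >= survival(b, t) for t in ts)
            and any(survival(a, t) > survival(b, t) for t in ts))

# Total-utility distributions (I is 0 or 1 with probability 0.5 each):
opt1 = {0: 0.5, 2: 0.5}      # I + I = 2I
opt2 = {0.5: 0.5, 1: 0.5}    # 0.5*(1 - I) + I = 0.5*(1 + I)

# Neither total stochastically dominates the other, even though
# option 1 is an ex ante Pareto improvement on option 2.
assert not fosd(opt1, opt2) and not fosd(opt2, opt1)
```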
We can make it slightly worse with 3 options and 3 people:
1. I + I + 0 = 2I
2. I + I + I = 3I
3. 0.5(1−I) + I + 0 = 0.5(1+I)
Again, 3 is not stochastically dominated by 1 or 2, but 1 is stochastically dominated by 2, so 1 is ruled out. So 3 is permissible, while the Pareto improvement over it, 1, is not (although a better Pareto improvement, 2, is permissible). So, stochastic dominance permits an option (3) while ruling out another option (1) that Pareto dominates it. This of course doesn’t mean 3 stochastically dominates 1, though.
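A sketch of the three-option case, treating an option as permissible exactly when no other option stochastically dominates it:

```python
def survival(dist, t):
    # P(X >= t) for a finite distribution {value: probability}
    return sum(p for v, p in dist.items() if v >= t)

def fosd(a, b):
    # First-order stochastic dominance for finite distributions
    ts = set(a) | set(b)
    return (all(survival(a, t) >= survival(b, t) for t in ts)
            and any(survival(a, t) > survival(b, t) for t in ts))

# Total-utility distributions for the three options:
opts = {
    1: {0: 0.5, 2: 0.5},      # 2I
    2: {0: 0.5, 3: 0.5},      # 3I
    3: {0.5: 0.5, 1: 0.5},    # 0.5*(1 + I)
}

# Permissible = not stochastically dominated by any other option.
permissible = [k for k in opts
               if not any(fosd(opts[j], opts[k]) for j in opts if j != k)]
assert permissible == [2, 3]  # 1 is ruled out by 2; 3 survives
```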
And for what it’s worth, this is not one of my major intuitions behind utilitarianism. Cluelessness already implies that I need to consider a butterfly flapping its wings before deciding whether to donate to AMF; stating that the butterfly could be outside my light cone doesn’t seem qualitatively different.
Cluelessness seems to me to be a practical concern about prediction, not how you evaluate uncertain outcomes when distributions are specified. If we are assuming the butterfly is completely independent from what you’re doing locally, then you’re pretty much biting the bullet on the Egyptology objection, and what you should do or are allowed to do now can depend on how well off you think the long-dead ancient Egyptians were, for non-instrumental reasons (not because this knowledge changes predictions about future events). I’m personally willing to bite this bullet, though; I don’t see why I can’t just care about the whole distribution.
And then we can make it worse still with infinitely many options :P
1. I + I + (1 − 1/n)I = (3 − 1/n)I, for n ≥ 1
2. 0.5(1−I) + I + 0 = 0.5(1+I)
Here, each option in 1 is Pareto dominated and stochastically dominated by any option from 1 for larger n, and 2 is the only option which is not stochastically dominated. If you are not allowed to choose stochastically dominated options, then 2 is the only permissible option, despite being Pareto dominated by all the others. In general, though, I think you just want to go with something like “scalar utilitarianism” and allow yourself to choose stochastically dominated options when there are infinitely many of them, or else you may have no permissible options.
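A sketch of the infinite-family case, truncated to finitely many n for the check: each member of family 1 is stochastically dominated by the next member, while option 2 is never dominated by any of them.

```python
def survival(dist, t):
    # P(X >= t) for a finite distribution {value: probability}
    return sum(p for v, p in dist.items() if v >= t)

def fosd(a, b):
    # First-order stochastic dominance for finite distributions
    ts = set(a) | set(b)
    return (all(survival(a, t) >= survival(b, t) for t in ts)
            and any(survival(a, t) > survival(b, t) for t in ts))

def family(n):
    # Total utility (3 - 1/n) * I for the n-th option in family 1
    return {0: 0.5, 3 - 1 / n: 0.5}

opt2 = {0.5: 0.5, 1: 0.5}     # 0.5 * (1 + I)

# Every member of family 1 is dominated by its successor, so with all of
# the infinitely many options available, only option 2 is undominated.
assert all(fosd(family(n + 1), family(n)) for n in range(1, 100))
assert not any(fosd(family(n), opt2) for n in range(1, 100))
```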
Oh interesting, thanks for sharing. These are compelling counterexamples.