Suppose that if I take trade 1, I have a subjective probability p≤100% that trade 2 will be available (and I will definitely take it if it is), and, conditional on taking trade 2, a subjective probability q≤100% that trade 3 will be available (and I will definitely take it if it is). There are two cases:
If p=q=100%, then I stick with World 1 and don’t make any trade. No Dutch book. (I don’t think p=q=100% is reasonable to assume in practice, though.)
Otherwise, p<100% or q<100% (or generally my overall probability of eventually taking trade 3 is less than 100%; I don’t need to definitely take the trades if they’re available). Based on my subjective probabilities, I’m not guaranteed to make both trades 2 and 3, so I’m not guaranteed to go from World 1 to World 1 but poorer. When I do end up in World 1 but poorer, this isn’t necessarily so different from the kinds of mundane errors that EU maximizers can make, too, e.g. if they find out that an option they selected was worse than they originally thought and switch to an earlier one at a cost.
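As a toy illustration (the numbers for p and q are made up, not from anything above), the subjective probability of completing the full money pump, and so ending up in World 1 but poorer, is just p·q, which is below 1 whenever either probability is:

```python
# Hypothetical numbers for illustration only.
p = 0.9  # subjective probability trade 2 becomes available after trade 1
q = 0.8  # subjective probability trade 3 becomes available after trade 2

# Both later trades must become available for the Dutch book to complete.
prob_full_dutch_book = p * q
print(prob_full_dutch_book)  # 0.72 < 1: the loss is not guaranteed
```

So from the agent's own perspective at trade 1, the sure loss is not foreseen as certain, which is why no Dutch book is guaranteed.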
A more specific person-affecting approach that handles uncertainty is given in Teruji Thomas (2019). The choices can be taken to be between policy functions for sequential decisions rather than between immediate decisions; the results are only sensitive to the distributions over final outcomes anyway.
Alternatively (or maybe this is a special case of Thomas’s work), as long as you guarantee transitivity within each set of possible definite outcomes from your corresponding policy functions (even at the cost of IIA), e.g. by using voting methods like Schulze/beatpath, then you can always avoid (strictly) statewise dominated policies as part of your decision procedure*. This rules out the kinds of Dutch books that guarantee you’re no better off in any state but worse off in some state. I’m not sure whether this approach is guaranteed to avoid (strictly) stochastically dominated options under the more plausible extensions of stochastic dominance when IIA is violated; that will depend on the extension.
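To make the beatpath idea concrete, here is a minimal sketch of the Schulze method: it takes possibly cyclic pairwise comparisons and produces a transitive ranking. The function names and the example margins are illustrative, not from anything above:

```python
# Hedged sketch of the Schulze/beatpath method (illustrative names/data).

def schulze_strengths(d):
    """d[i][j] = strength of the pairwise win of option i over option j
    (e.g. number of states/criteria preferring i to j).
    Returns strongest-beatpath strengths."""
    n = len(d)
    p = [[d[i][j] if d[i][j] > d[j][i] else 0 for j in range(n)]
         for i in range(n)]
    # Floyd-Warshall variant computing widest (max-min) path strengths.
    for k in range(n):
        for i in range(n):
            if i == k:
                continue
            for j in range(n):
                if j in (i, k):
                    continue
                p[i][j] = max(p[i][j], min(p[i][k], p[k][j]))
    return p

def schulze_ranking(d):
    p = schulze_strengths(d)
    n = len(d)
    # Option i beats j overall iff its strongest beatpath to j is
    # stronger than j's beatpath back; rank by number of such wins.
    wins = [sum(p[i][j] > p[j][i] for j in range(n)) for i in range(n)]
    return sorted(range(n), key=lambda i: -wins[i])

# A 3-option pairwise cycle (0 beats 1, 1 beats 2, 2 beats 0),
# which beatpath nonetheless resolves into a transitive ranking.
d = [[0, 8, 3],
     [2, 0, 7],
     [6, 4, 0]]
print(schulze_ranking(d))  # [0, 1, 2]
```

The point is that even when the underlying pairwise comparisons are cyclic (an IIA-style pathology), the beatpath relation itself is transitive, so it can be used to pick an undominated policy from any finite menu.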
*over a finite set of policies to choose from. Say outcome distribution A (strictly) statewise dominates outcome distribution B given the set of alternative outcome distributions S={X1,…,Xn} (including A and B) if
P[A ≥_S B] = 1, and, for strict domination, P[A >_S B] > 0,
where the inequality is evaluated statewise by fixing a state for A, B and all the alternatives in S, i.e. A(ω) ≥_{X1(ω),…,Xn(ω)} B(ω) for state ω, with respect to a probability measure P over ω.
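The footnote's definition can be sketched in code. Everything here is illustrative: `geq` stands in for the menu-relative comparison ≥_S (which could itself come from a beatpath ranking, and need not satisfy IIA), and states are taken to be finitely many and equiprobable for simplicity:

```python
# Hedged sketch of menu-relative statewise dominance (illustrative names).

def statewise_dominates(A, B, S, geq, strict=True):
    """A, B: lists of outcomes, one per (equiprobable) state.
    S: the menu of alternative outcome lists (including A and B).
    geq(a, b, menu): True iff outcome a is at least as good as b,
    given the menu of outcomes available in that state."""
    strictly = 0
    for w in range(len(A)):               # w indexes states
        menu = [X[w] for X in S]
        if not geq(A[w], B[w], menu):
            return False                  # fails P[A >=_S B] = 1
        if not geq(B[w], A[w], menu):
            strictly += 1                 # A strictly better at state w
    return strictly > 0 if strict else True

# Example with numeric outcomes and a menu-independent ranking:
geq = lambda a, b, menu: a >= b
A = [3, 2, 5]
B = [3, 1, 4]
print(statewise_dominates(A, B, [A, B], geq))  # True: A never worse, sometimes better
```

A decision procedure that never selects a policy whose outcome distribution is strictly statewise dominated in this sense, relative to the finite menu of policies under consideration, avoids the sure-loss Dutch books described above.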