I think you are probably only rationally required not to take any action that results in a world that is worse than some alternative at every percentile of the probability distribution.
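The "worse at every percentile" condition here is first-order stochastic dominance. As a minimal illustration (the helper names are mine, and this assumes finite, discrete lotteries), the check can be written directly in terms of quantiles:

```python
import numpy as np

def quantiles(outcomes, probs, percentiles):
    """Quantile function of a finite discrete lottery."""
    order = np.argsort(outcomes)
    outcomes = np.asarray(outcomes, dtype=float)[order]
    cum = np.cumsum(np.asarray(probs, dtype=float)[order])
    # smallest outcome whose cumulative probability reaches each percentile
    idx = np.minimum(np.searchsorted(cum, percentiles), len(outcomes) - 1)
    return outcomes[idx]

def dominates(x, px, y, py, grid=np.linspace(0.001, 0.999, 999)):
    """True iff lottery (x, px) is at least as good as (y, py) at every
    percentile and strictly better at some percentile."""
    qx, qy = quantiles(x, px, grid), quantiles(y, py, grid)
    return bool(np.all(qx >= qy) and np.any(qx > qy))

# A sure payoff of 1 vs. a 50/50 flip between 0 and 10: neither dominates
# the other, so on the view above neither choice is rationally required,
# even though the coin flip has five times the expected value.
print(dominates([1], [1.0], [0, 10], [0.5, 0.5]))           # False
print(dominates([0, 10], [0.5, 0.5], [1], [1.0]))           # False
print(dominates([0, 10], [0.5, 0.5], [0, 5], [0.5, 0.5]))   # True
```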
I think this is probably wrong, and I view stochastic dominance as a backup decision rule, not as a total replacement for expected value. Some thoughts here.
Why try to maximize EV at all, though?
I think Dutch book/money pump arguments require you to rank unrealistic hypotheticals (e.g., scenarios where your subjective probabilities about, say, extinction risk are predictably manipulated by an adversary), and the laws of large numbers and central limit theorems can have limited applicability when there are too few statistically independent outcomes.
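To illustrate the LLN point with toy numbers of my own choosing: a longshot with higher expected value than a sure thing usually pays out less when you only get a limited number of independent shots at it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Longshot: pays 2000 with probability 1/1000, else 0 (EV = 2 per bet),
# vs. a sure payoff of 1 per bet. EV favors the longshot 2:1, but over
# only 100 independent bets it usually does worse than the sure thing.
n_bets, n_sims = 100, 100_000
wins = rng.binomial(n_bets, 1 / 1000, size=n_sims)  # wins per simulated run
longshot_total = wins * 2000
sure_total = n_bets * 1

print((longshot_total < sure_total).mean())  # ~0.90: longshot usually loses
print(longshot_total.mean() / n_bets)        # ~2.0: per-bet EV, as advertised
```

With enough independent repetitions the comparison would flip, but whether you get enough of them is exactly the premise in question.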
Even in a multiverse, much of our uncertainty should be correlated across agents, e.g., uncertainty about logical implications, or about facts or tendencies of the world. We can condition on some of those uncertain possibilities separately, apply the LLN or CLT to each condition across the multiverse, and then aggregate over the conditions, but I'm not convinced this works out to give you EV maximization.
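A sketch of what I mean, with all numbers invented for illustration: condition on a shared binary "fact", let the LLN do its work within each condition, and the aggregated multiverse-wide average still never lands anywhere near the unconditional expected value.

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_sims = 1_000, 10_000

# A shared uncertain "fact" (a logical or empirical claim), identical for
# every agent in the multiverse, true with subjective probability 0.5.
fact = rng.binomial(1, 0.5, size=(n_sims, 1))

# Conditional on the fact, each agent's payoff is an independent draw:
# mean 10 if the fact holds, mean 0 if it doesn't.
payoffs = rng.normal(loc=10.0 * fact, scale=1.0, size=(n_sims, n_agents))
avg = payoffs.mean(axis=1)  # multiverse-wide average payoff

# Within each condition the LLN works fine: the average concentrates.
print(avg[fact[:, 0] == 1].std())  # ~0.03
print(avg[fact[:, 0] == 0].std())  # ~0.03

# But aggregated over conditions it's a 50/50 mixture of ~0 and ~10,
# with essentially no mass near the unconditional EV of 5.
print(avg.mean())                      # ~5.0
print(((avg > 4) & (avg < 6)).mean())  # ~0.0
```

The expected value summarizes the mixture, but no amount of intra-condition averaging makes it a quantity any agent actually experiences.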