Oh, also you wrote “La is better than Lb” in the definition of Minimal Tradeoffs, but I think you meant the reverse?
But there is a worry that if you don’t make r fixed, you could have an infinite sequence of decreasing r’s that never goes arbitrarily low (e.g., 1, 3⁄4, 5⁄8, 9⁄16, 17⁄32, 33⁄64, …).
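To make that concrete (my closed form for the example sequence, not anything from the original post): the terms are

$$r_n = \frac{1}{2} + \frac{1}{2^n},$$

so the $r_n$ are strictly decreasing, yet $\inf_n r_n = \frac{1}{2}$, and they never go below $\frac{1}{2}$.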
Isn’t the problem if the r’s approach 1? Specifically, for each lottery, take the infimum of the r’s that work (it should be ≤ 1), and then take the supremum of those infima over all lotteries. Your definition requires that this supremum be < 1.
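In symbols (my formalization, with “works” left as whatever your definition requires of r):

$$\sup_{L}\,\inf\{\, r : r \text{ works for } L \,\} < 1.$$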
Yep, that utility function is bounded, so using it with EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.
And, in a sense, we’re not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you’re not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn’t have to worry about at all in your decision-making. (After all, it’s a bunch of events off in ancient India or in far-distant space, outside your lightcone.) In the moral setting we’re dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever value B will actually turn out to have: some b. That’s roughly what’s going on there.
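Schematically (my sketch of the two moves, with $\succeq$ for “at least as good as” and B independent of both options): cancelling B means treating

$$L_\text{risky} + B \succeq L_\text{safe} + B \iff L_\text{risky} \succeq L_\text{safe},$$

and conditionalising means running the same comparison with B’s realised value b plugged in on both sides.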
Hmm, I think this kind of stochastic separability assumption implies risk-neutrality (under the assumption of independence of irrelevant alternatives?), since it forces your rankings to be shift-invariant. If you maximize the expected value of some function of the total utilitarian sum (i.e., you’re a vNM-rational utilitarian), then I think it should rule out non-linear functions of that sum.
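To illustrate the shift-invariance point, here’s a toy numerical sketch (my own example; the bounded utility function u(x) = arctan(x) and the particular lotteries are illustrative choices, not from the post): adding a large independent background payoff flips which lottery an expected-utility maximizer with bounded u prefers.

```python
import math

def expected_utility(lottery, background, u=math.atan):
    """Expected utility of a lottery (list of (probability, payoff) pairs)
    on top of a fixed, unaffected background payoff."""
    return sum(p * u(x + background) for p, x in lottery)

safe = [(1.0, 1)]               # +1 unit of value for sure
risky = [(0.5, 10), (0.5, -5)]  # coin flip between +10 and -5

for b in (0, -100):  # same pair of options, different background
    eu_safe = expected_utility(safe, b)
    eu_risky = expected_utility(risky, b)
    winner = "safe" if eu_safe > eu_risky else "risky"
    print(f"background {b:>4}: EU(safe)={eu_safe:+.6f} "
          f"EU(risky)={eu_risky:+.6f} -> {winner}")

# background    0: safe wins
# background -100: risky wins -- the ranking is not shift-invariant,
# so this bounded u violates the separability assumption above.
```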
However, what if we maximize the expected value of some function of the difference we make (e.g. relative to a “business as usual” option, subtracting that option’s value)? This way, the independent background B drops out, since it gets cancelled, and we can use a bounded vNM utility function on what’s left. One argument I’ve heard against this (from section 4.2 here) is that it’s too agent-relative, but the intuition for stochastic separability itself seems somewhat agent-relative, too. I suppose there are slightly different ways of framing stochastic separability, “What I can’t affect shouldn’t change what I should do” vs. “What isn’t affected shouldn’t change what’s best”, with only the former being agent-relative, though it’s also the more plausible one given agent-relative ethics. If I reject agent-relative ethics, neither seems so obvious.
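Spelled out (my notation, with BAU the “business as usual” option, u the total utilitarian sum an option produces, and f a bounded vNM utility function): the proposal ranks options A by

$$V(A) = \mathbb{E}\big[f\big(u(A) - u(\text{BAU})\big)\big],$$

and since B adds equally to u(A) and u(BAU), it cancels inside f’s argument.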