Oh, also you wrote "La is better than Lb" in the definition of Minimal Tradeoffs, but I think you meant the reverse?
But there is a worry that if you don't make it a fixed r, then you could have an infinite sequence of decreasing r's that nonetheless don't go arbitrarily low. (e.g., 1, 3/4, 5/8, 9/16, 17/32, 33/64, …)
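A minimal sketch of that sequence, assuming the pattern behind the listed terms is r_n = 1/2 + 1/2^n (my reading of the example, which matches every term given):

```python
from fractions import Fraction

# The sequence from the example: 1, 3/4, 5/8, 9/16, 17/32, 33/64, ...
# read as r_n = 1/2 + 1/2^n.
def r(n):
    return Fraction(1, 2) + Fraction(1, 2**n)

seq = [r(n) for n in range(1, 7)]

# The r's are strictly decreasing...
assert all(a > b for a, b in zip(seq, seq[1:]))
# ...yet bounded below by 1/2, so they never get arbitrarily low.
assert all(x > Fraction(1, 2) for x in seq)
```

So each step uses a smaller r than the last, but no step ever dips below 1/2, which is what makes "some r at each step" weaker than "a fixed r".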
Isn't the problem if the r's approach 1? Specifically, for each lottery, take the infimum of the r's that work (it should be ≤ 1), and then take the supremum of those infimums over all lotteries. Your definition requires that this supremum be < 1.
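To make that concrete, here is a toy family of lotteries where every per-lottery infimum is strictly below 1 but their supremum is not (the values 1 - 1/2^n are hypothetical, just to exhibit the failure mode):

```python
from fractions import Fraction

# Suppose for lottery n, the infimum of the r's that work is 1 - 1/2^n.
# (Hypothetical values, chosen to approach 1.)
r_inf = [1 - Fraction(1, 2**n) for n in range(1, 8)]

# Each individual infimum is strictly below 1...
assert all(r < 1 for r in r_inf)
# ...but the infimums approach 1, so the supremum over the whole
# (infinite) family is 1, violating the requirement that it be < 1.
```

Here every lottery individually satisfies "some r < 1 works", yet no single r < 1 works for all of them at once.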
Yep, that utility function is bounded, so using it with EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.
And, in a sense, we're not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you're not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn't have to worry about at all in your decision-making. (After all, it's a bunch of events off in ancient India or in far distant space, outside your lightcone.) In the moral setting we're dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever B will actually turn out as: some b. That's roughly what's going on there.
Hmm, I think this kind of stochastic separability assumption implies risk-neutrality (under the assumption of independence of irrelevant alternatives?), since it will force your rankings to be shift-invariant. If you maximize the expected value of some function of the total utilitarian sum (you're a vNM-rational utilitarian), then I think shift-invariance should rule out non-linear functions of that sum.
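A toy numerical sketch of the shift-invariance point, using a hypothetical bounded utility tanh(x/50) over the total sum (my own example, not from the post): a risky lottery beats a safe one at one background level, but the ranking flips when an unaffected background is added.

```python
import math

# Hypothetical bounded vNM utility over the total utilitarian sum.
def f(total):
    return math.tanh(total / 50)

def eu(lottery, background=0):
    # lottery: list of (probability, payoff) pairs;
    # background is a constant B added to every outcome.
    return sum(p * f(x + background) for p, x in lottery)

risky = [(0.5, -100), (0.5, 0)]
safe = [(1.0, -50)]

# With no background, tanh is convex on the negatives, so the
# risky lottery is preferred...
assert eu(risky) > eu(safe)
# ...but adding an unaffected background of +100 shifts both lotteries
# into the concave region and flips the ranking.
assert eu(risky, background=100) < eu(safe, background=100)
```

Since the ranking depends on B even though B is the same across options, this f is not shift-invariant, which is what stochastic separability would forbid.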
However, what if we maximize the expected value of some function of the difference we make (e.g. compared to a "business as usual" option, subtracting the value of that option)? This way, we get to ignore the independent background B, since it cancels, and we can use a bounded vNM utility function on what's left. One argument I've heard against this (from section 4.2 here) is that it's too agent-relative, but the intuition for stochastic separability itself seems kind of agent-relative, too. I suppose there are slightly different ways of framing stochastic separability, "What I can't affect shouldn't change what I should do" vs "What isn't affected shouldn't change what's best", with only the former agent-relative, although also the more plausible given agent-relative ethics. If I reject agent-relative ethics, neither seems so obvious.
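Continuing the toy example above (same hypothetical tanh utility, treating B as a shared constant for simplicity): applying the bounded utility to the difference from a business-as-usual payoff makes B cancel, so the ranking is the same at every background level.

```python
import math

# Hypothetical bounded utility, now applied to the *difference* from a
# "business as usual" payoff rather than to the total sum.
def f(d):
    return math.tanh(d / 50)

def eu_diff(lottery, bau, background=0):
    # B appears in both the option's outcome and the baseline, so it
    # cancels out of the difference and cannot affect the ranking.
    return sum(p * f((x + background) - (bau + background))
               for p, x in lottery)

risky = [(0.5, -100), (0.5, 0)]
safe = [(1.0, -50)]
bau = 0  # hypothetical business-as-usual payoff

# The comparison is identical with or without the background:
assert eu_diff(risky, bau) > eu_diff(safe, bau)
assert eu_diff(risky, bau, background=100) > eu_diff(safe, bau, background=100)
```

This is only a sketch: with B a genuinely independent lottery rather than a constant, the cancellation argument needs more care, but the shared term still drops out of each realised comparison.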