Great to see actual defences of fanaticism! As you say, the arguments get stronger as you proceed, although I personally only found section 6 compelling, and the argument there depends on your precise background uncertainty. Depending on your background uncertainty, you could instead reject a lottery with a positive probability of an infinite payoff (as per Tarsney or the last paragraph in this comment) for a more probable finite payoff. To me, this still seems like a rejection of a kind of fanaticism, defined slightly differently.
I think your Minimal Tradeoffs is not that minimal, since it's uniform in r, i.e. the same r can always be used. A weaker assumption would just be that for any binary lottery with payoff v or 0, there is another binary lottery with lower probability of nonzero payoff that's better. And this is compatible with a bounded vNM utility function. I would guess, assuming a vNM utility function, your Minimal Tradeoffs is equivalent to mine + unbounded above.
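To make that concrete, here's a quick sketch in Python (the arctan utility and the starting lottery p = 0.5, v = 10 are just assumptions for illustration): because arctan is bounded but never reaches its supremum π/2, any binary lottery can be beaten by one with a slightly lower probability of a sufficiently larger payoff.

```python
import math

def eu(p, v):
    """Expected arctan-utility of a binary lottery paying v with probability p, else 0."""
    return p * math.atan(v)

def better_lottery(p, v):
    """Find (p2, v2) with p2 < p that the arctan-utility agent strictly prefers.

    Works because atan(v) < pi/2 for every finite v, so a less probable but
    sufficiently larger payoff can compensate (as long as target/p2 + 0.01
    stays below pi/2, which holds for these illustrative numbers).
    """
    target = eu(p, v)
    p2 = (target / (math.pi / 2) + p) / 2  # strictly between target/sup and p
    v2 = math.tan(target / p2 + 0.01)      # payoff chosen so p2 * atan(v2) > target
    return p2, v2

p2, v2 = better_lottery(0.5, 10)
assert p2 < 0.5 and eu(p2, v2) > eu(0.5, 10)
```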
As someone who's sympathetic to bounded social welfare functions, Scale Consistency doesn't seem obvious (then again, neither does cardinal welfare to me, and with it, totalism), and it doesn't seem motivated in the paper. Maybe you could use Scale Invariance instead:
For any lotteries L_a and L_b, if L_a < L_b, then k·L_a < k·L_b for any positive integer k.
where the multiplication by k is not multiplying value directly, but by duplicating the world/outcome k times (with perfect correlation between the duplicates). This is (I think) Scale Invariance from this paper, which generalizes Harsanyi's utilitarian theorem using weaker assumptions (in particular, it does not assume the vNM rationality axioms), and their representation theorems could also lead to fanaticism. Scale Invariance is a consequence of "stochastic" separability: that if A > B, then A + C > B + C, for any lotteries A, B and C such that A and B act on disjoint populations from C.
I'm confused about the following bit when you construct the background uncertainty in section 6, although I think your argument still goes through. First,
> And Stochastic Dominance says that they must make the judgement L_risky + B ≻ L_safe + B. But, if Fanaticism is not true, they cannot say that L_risky alone is better than L_safe alone.
This seems fine. Next, you write:
> Nor can they say that L_safe plus an additional payoff b is better than L_risky plus the same b.
They can't say this for all b, but they can for some b, right? Aren't they saying exactly this when they deny Fanaticism ("If you deny Fanaticism, you know that no matter how your background uncertainty is resolved, you will deny that L_risky plus b is better than L_safe plus b.")? Is this meant to follow from L_risky + B ≻ L_safe + B? I think that's what you're trying to argue after, though.
Then,
> (Adding a constant to both lotteries cannot change the verdict because totalism implies that outcomes can be compared with cardinal values alone.)
Aren't we comparing lotteries, not definite outcomes? Your vNM utility function could be arctan(∑_i u_i), where the function inside the arctan is just the total utilitarian sum. Let L_safe = π/2, and L_risky = ∞ with probability 0.5 (which is not small, but this is just to illustrate) and 0 otherwise. Then these have the same expected value without a background payoff (or b = 0), but with b > 0, the safe option has higher EV, while with b < 0, the risky option has higher EV. Of course, then, this utility function doesn't deny Fanaticism under all possible background uncertainty.
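For what it's worth, a quick numerical check of these two lotteries under the arctan utility (b = 1 and b = −2 are just sample values where the comparison flips):

```python
import math

def eu_safe(b):
    # sure payoff pi/2, plus background b, through the bounded utility atan
    return math.atan(math.pi / 2 + b)

def eu_risky(b):
    # infinity with probability 0.5 (math.atan(math.inf) == pi/2), else just b
    return 0.5 * math.atan(math.inf) + 0.5 * math.atan(b)

assert eu_safe(1) > eu_risky(1)    # with a positive background, safe wins
assert eu_safe(-2) < eu_risky(-2)  # with a negative background, risky wins
```

So the verdict between the two lotteries really does depend on the background payoff under this bounded utility function.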
Thanks! Good point about Minimal Tradeoffs. But there is a worry that if you don't make it a fixed r then you could have an infinite sequence of decreasing r's, but they don't go arbitrarily low (e.g. 1, 3/4, 5/8, 9/16, 17/32, 33/64, …).
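That example sequence can be written as r_n = 1/2 + 2^(−n); a quick check that it decreases forever yet never drops below 1/2:

```python
from fractions import Fraction

# The example sequence 1, 3/4, 5/8, 9/16, ...: r_n = 1/2 + 2**(-n)
rs = [Fraction(1, 2) + Fraction(1, 2**n) for n in range(1, 7)]
assert rs[:3] == [Fraction(1), Fraction(3, 4), Fraction(5, 8)]

# Strictly decreasing, but bounded below by 1/2: the probabilities
# never get arbitrarily low even though they keep decreasing.
assert all(a > b for a, b in zip(rs, rs[1:]))
assert all(r > Fraction(1, 2) for r in rs)
```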
I agree that Scale-Consistency isn't as compelling as some of the other key principles in there. And, with totalism, it could be replaced with the principle you suggest in which multiplication is just duplicating the world k times. Assuming totalism, that'd be a weaker claim, which is good. I guess one minor worry is that, if we reject totalism, duplicating a world k times wouldn't scale its value by k. So Scale-Consistency is maybe the better principle for arguing in greater generality. But yeah, not needed for totalism.
> Nor can they say that L_safe plus an additional payoff b is better than L_risky plus the same b.
>
> They can't say this for all b, but they can for some b, right? Aren't they saying exactly this when they deny Fanaticism ("If you deny Fanaticism, you know that no matter how your background uncertainty is resolved, you will deny that L_risky plus b is better than L_safe plus b.")? Is this meant to follow from L_risky + B ≻ L_safe + B? I think that's what you're trying to argue after, though.
Nope, I wasn't meaning for the statement involving little b to follow from the one about big B. b is a certain payoff, while B is a lottery. When we add b to either lottery, we're just adding a constant to all of the payoffs. Then, if lotteries can be evaluated by their cardinal payoffs, we've got to say that L_1 + b > L_2 + b iff L_1 > L_2.
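A minimal sketch of that invariance (the two lotteries are hypothetical, just for illustration): if lotteries are evaluated by expected total cardinal value, adding the same sure b to both shifts both expectations by exactly b, so the comparison can never change.

```python
def expected(lottery):
    """Expected value of a lottery given as (probability, payoff) pairs."""
    return sum(p * v for p, v in lottery)

def add_constant(lottery, b):
    """Add a sure payoff b to every outcome of the lottery."""
    return [(p, v + b) for p, v in lottery]

l1 = [(0.5, 10), (0.5, 0)]  # hypothetical lotteries, for illustration only
l2 = [(1.0, 4)]

# E[L + b] = E[L] + b, so the ranking is the same for every b
for b in (-100, 0, 7):
    assert (expected(add_constant(l1, b)) > expected(add_constant(l2, b))) \
        == (expected(l1) > expected(l2))
```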
> Aren't we comparing lotteries, not definite outcomes? Your vNM utility function could be arctan(∑_i u_i), where the function inside the arctan is just the total utilitarian sum. Let L_safe = π/2, and L_risky = ∞ with probability 0.5 (which is not small, but this is just to illustrate) and 0 otherwise. Then these have the same expected value without a background payoff (or b = 0), but with b > 0, the safe option has higher EV, while with b < 0, the risky option has higher EV.
Yep, that utility function is bounded, so using it and EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.
And, in a sense, we're not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you're not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn't have to worry about at all in your decision-making. (After all, it's a bunch of events off in ancient India or in far distant space, outside your lightcone.) In the moral setting we're dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever B will actually turn out as: some b. That's roughly what's going on there.
Oh, also, you wrote "L_a is better than L_b" in the definition of Minimal Tradeoffs, but I think you meant the reverse?
> But there is a worry that if you don't make it a fixed r then you could have an infinite sequence of decreasing r's, but they don't go arbitrarily low (e.g. 1, 3/4, 5/8, 9/16, 17/32, 33/64, …).
Isn't the problem if the r's approach 1? Specifically, for each lottery, take the infimum of the r's that work (it should be ≤ 1), and then take the supremum of those infima over all lotteries. Your definition requires that this supremum be < 1.
> Yep, that utility function is bounded, so using it and EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.
>
> And, in a sense, we're not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you're not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn't have to worry about at all in your decision-making. (After all, it's a bunch of events off in ancient India or in far distant space, outside your lightcone.) In the moral setting we're dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever B will actually turn out as: some b. That's roughly what's going on there.
Hmm, I think this kind of stochastic separability assumption implies risk-neutrality (under the assumption of independence of irrelevant alternatives?), since it will force your rankings to be shift-invariant. If you maximize the expected value of some function of the total utilitarian sum (you're a vNM-rational utilitarian), then I think it should rule out non-linear functions of that sum.
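To make the connection concrete, here's a sketch with assumed numbers, reusing the arctan-of-total-sum utility from my example above: a nonlinear function of the sum lets an independent background lottery C reverse a ranking, violating separability.

```python
import math

def eu(lottery):
    """Expected arctan-utility; lottery is a list of (probability, total payoff)."""
    return sum(p * math.atan(v) for p, v in lottery)

def add_independent(l1, l2):
    """Sum of two independent lotteries (convolution of their distributions)."""
    return [(p1 * p2, v1 + v2) for p1, v1 in l1 for p2, v2 in l2]

a = [(1.0, math.pi / 2)]            # safe: pi/2 for sure
b = [(0.5, math.inf), (0.5, 0.0)]   # risky: infinity with probability 0.5
c = [(0.5, -2.0), (0.5, 0.0)]       # independent background lottery

# A beats B on its own, but B + C beats A + C: separability fails for this
# nonlinear function of the total sum, so the rankings aren't shift-invariant.
assert eu(a) > eu(b)
assert eu(add_independent(a, c)) < eu(add_independent(b, c))
```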
However, what if we maximize the expected value of some function of the difference we make (e.g. compared to a "business as usual" option, subtracting the value of that option)? This way, we have to ignore the independent background B since it gets cancelled, and we can use a bounded vNM utility function on what's left. One argument I've heard against this (from section 4.2 here) is that it's too agent-relative, but the intuition for stochastic separability itself seems kind of agent-relative, too. I suppose there are slightly different ways of framing stochastic separability, "What I can't affect shouldn't change what I should do" vs "What isn't affected shouldn't change what's best", with only the former agent-relative, although also more plausible given agent-relative ethics. If I reject agent-relative ethics, neither seems so obvious.