One response to Infectiousness is that expected value maximization is derived from more fundamental rationality axioms together with certain non-rational assumptions, and those rationality axioms can still work fine on their own to lead to the wager if used directly (similar to Huemer's argument). From Rejecting Supererogationism by Christian Tarsney:
Strengthened Genuine Dominance over Theories (GDoT*): If some theories in which you have credence give you subjective reason to choose x over y, and all other theories in which you have credence give you equal* subjective reason to choose x as to choose y, then, rationally, you should choose x over y.
and
Final Dominance over Theories (FDoT): If (i) every theory in which an agent A has positive credence implies that, conditional on her choosing option O, she has equal* or greater subjective reason to choose O as to choose P, (ii) one or more theories in which she has positive credence imply that, conditional on her choosing O, she has greater subjective reason to choose O than to choose P, and (iii) one or more theories in which she has positive credence imply that, conditional on her choosing P, she has greater subjective reason to choose O than to choose P, then A is rationally prohibited from choosing P.
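To make the structure explicit, here is a compact restatement of GDoT* (the symbolism is mine, not Tarsney's: Cr(T) is your credence in theory T, "x ≻_T y" means T gives you greater subjective reason to choose x than y, and "x =*_T y" means T gives you equal* reason to choose x as y, in the sense defined below):

$$
\Big[\exists T:\ \mathrm{Cr}(T)>0 \wedge x \succ_T y\Big] \;\wedge\; \Big[\forall T:\ \mathrm{Cr}(T)>0 \rightarrow \big(x \succ_T y \vee x =^{*}_{T} y\big)\Big] \;\Longrightarrow\; \text{you should choose } x \text{ over } y.
$$

FDoT has the same shape, except that the reason relations are taken conditional on which option is actually chosen, and the conclusion is that choosing P is rationally prohibited.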
Here, "equal*" is defined this way:
"'x is equally as F as y' means that [i] x is not Fer than y, and [ii] y is not Fer than x, and [iii] anything that is Fer than y is also Fer than x, and [iv] y is Fer than anything x is Fer than" (Broome, 1997, p. 72). If nihilism is true, then all four clauses in Broome's definition are trivially satisfied for any x and y and any evaluative property F (e.g. "good," "right," "choiceworthy," "supported by objective/subjective reasons"): if nothing is better than anything else, then x is not better than y, y is not better than x, and since neither x nor y is better than anything, it is vacuously true that for anything either x or y is better than, the other is better as well. Furthermore, by virtue of these last two clauses, Broome's definition distinguishes (as Broome intends it to) between equality and other relations like parity and incomparability in the context of non-nihilistic theories.
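In symbols (again my notation, not Broome's or Tarsney's, writing z ≻_F w for "z is Fer than w"):

$$
x =^{*}_{F} y \;\iff\; \neg(x \succ_F y) \,\wedge\, \neg(y \succ_F x) \,\wedge\, \forall z\,\big(z \succ_F y \rightarrow z \succ_F x\big) \,\wedge\, \forall z\,\big(x \succ_F z \rightarrow y \succ_F z\big).
$$

Under nihilism the relation ≻_F is empty, so every conjunct holds vacuously for any x and y, which is exactly the point made in the passage above.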
Of course, why should we accept GDoT* or FDoT, or any kind of rationality/dominance axioms, in the first place?
Furthermore, besides equality*, GDoT*, and FDoT being pretty contrived, the dominance principles discussed in Tarsney's paper are all pretty weak: for them to imply that we should choose x over y, we must have exactly 0 credence in every theory that implies we should choose y over x. How can we justify assigning exactly 0 credence to any specific moral claim while assigning positive credence to others? If we can't, shouldn't we assign them all positive credence? How do we rule out ethical egoism? How do we rule out the possibility that involuntary suffering is actually good (or a specific theory which says to maximize aggregate involuntary suffering)? If we can't rule out anything, these principles can never actually be applied, and the wager fails. (This ignores the problem of more than countably many mutually exclusive claims, which can't all be assigned positive credence, since the sum of the credences would then exceed 1; see the sketch below.)
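To spell out that parenthetical point (a standard measure-theoretic argument, not something taken from Tarsney's paper): suppose an uncountable set S of mutually exclusive claims each received positive credence, and write

$$
S=\bigcup_{n\ge 1} S_n, \qquad S_n=\{\,c\in S : \mathrm{Cr}(c) > \tfrac{1}{n}\,\}.
$$

If every S_n were finite, S would be a countable union of finite sets and hence countable. So some S_n is infinite, and the credences of any n+1 of its members already sum to more than 1.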
We also have reason to believe that a moral parliament approach is wrong, since it ignores the relative strengths of claims across different theories; and as far as I can tell, there's no good way to incorporate the relative strengths of claims between theories either, so there doesn't seem to be any good way to deal with this problem. And again, there's no convincing positive reason to choose any such approach at all, rather than reject them all.
Maybe you ought to assign them all positive credence (and push the problem up a level), but this says nothing about how much, or about why I shouldn't assign equal or greater credence to the "exact opposite" principles, e.g. a principle saying that if I have more credence in x > y than in y > x, then I should choose y over x.
Furthermore, in section 2.2 Tarsney points out that GDoT* undermines itself for at least one particular form of nihilism.
Thanks for sharing these points. For people interested in this topic, I'd also recommend Tarsney's full thesis, or dipping into relevant chapters of it. (I only read about a quarter of it myself and am not an expert in the area, but it seemed quite interesting and like it was probably making quite substantive contributions.)
Also on infectiousness, I thought I'd note that MacAskill himself provides what he calls "a solution to the infectious incomparability problem". He does this in his thesis, rather than in the paper he published a year earlier, which Lukas referenced in this post. (I can't actually remember the details of this proposed solution, but it was at the end of Chapter 5, for anyone interested.)