One response to Infectiousness is that expected value maximization is derived from more fundamental rationality axioms together with certain non-rational assumptions, and those rationality axioms on their own can still lead to the wager if used directly (similar to Huemer’s argument). From Rejecting Supererogationism by Christian Tarsney:
Strengthened Genuine Dominance over Theories (GDoT*) – If some theories in which you have credence give you subjective reason to choose x over y, and all other theories in which you have credence give you equal* subjective reason to choose x as to choose y, then, rationally, you should choose x over y.
and
Final Dominance over Theories (FDoT) – If (i) every theory in which an agent A has positive credence implies that, conditional on her choosing option O, she has equal* or greater subjective reason to choose O as to choose P, (ii) one or more theories in which she has positive credence imply that, conditional on her choosing O, she has greater subjective reason to choose O than to choose P, and (iii) one or more theories in which she has positive credence imply that, conditional on her choosing P, she has greater subjective reason to choose O than to choose P, then A is rationally prohibited from choosing P.
Here, “equal*” is defined this way:
‘“x is equally as F as y” means that [i] x is not Fer than y, and [ii] y is not Fer than x, and [iii] anything that is Fer than y is also Fer than x, and [iv] y is Fer than anything x is Fer than’ (Broome, 1997, p. 72). If nihilism is true, then all four clauses in Broome’s definition are trivially satisfied for any x and y and any evaluative property F (e.g. ‘good,’ ‘right,’ ‘choiceworthy,’ ‘supported by objective/subjective reasons’): if nothing is better than anything else, then x is not better than y, y is not better than x, and since neither x nor y is better than anything, it is vacuously true that for anything either x or y is better than, the other is better as well. Furthermore, by virtue of these last two clauses, Broome’s definition distinguishes (as Broome intends it to) between equality and other relations like parity and incomparability in the context of non‐nihilistic theories.
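To see how demanding GDoT* is as a decision rule, here is a minimal sketch (my own toy formalization, not from Tarsney's paper), assuming each theory reports one of three verdicts on the pair (x, y): “x>y”, “y>x”, or “equal*”:

```python
def gdot_star_prefers_x(theories):
    """Toy check of GDoT* for the pair (x, y).

    theories: list of (credence, verdict) pairs, every credence > 0.
    GDoT* mandates x over y only when at least one theory says "x>y"
    and every other theory says "equal*" -- a single positive-credence
    theory saying "y>x" blocks the principle entirely.
    """
    assert all(credence > 0 for credence, _ in theories)
    verdicts = [verdict for _, verdict in theories]
    return "x>y" in verdicts and all(v in ("x>y", "equal*") for v in verdicts)

# One dissenting theory, however tiny its credence, disables the principle:
print(gdot_star_prefers_x([(0.9, "x>y"), (0.1, "equal*")]))  # True
print(gdot_star_prefers_x([(0.9, "x>y"), (0.1, "y>x")]))     # False
```

Note that the credences play no role beyond being positive, which is exactly why the principle is so weak: it only bites when dissent gets credence exactly 0.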
Of course, why should we accept GDoT* or FDoT or any kind of rationality/dominance axioms in the first place?
Furthermore, besides equality*, GDoT*, and FDoT being pretty contrived, the dominance principles discussed in Tarsney’s paper are all pretty weak: for them to imply that we should choose x over y, we must have exactly 0 credence in every theory that implies we should choose y over x. How can we justify assigning exactly 0 credence to some specific moral claims and positive credence to others? If we can’t, shouldn’t we assign them all positive credence? How do we rule out ethical egoism? How do we rule out the possibility that involuntary suffering is actually good (or a specific theory which says to maximize aggregate involuntary suffering)? If we can’t rule anything out, these principles can never actually be applied, and the wager fails. (This sets aside the problem of more than countably many mutually exclusive claims, since they can’t all be assigned positive credence: the sum of their credences would diverge, and so exceed 1.)
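To spell out the parenthetical point, here is the standard argument (notation mine, with Cr for credence) that an uncountable set S of mutually exclusive claims cannot all receive positive credence:

```latex
% Partition S by how large each positive credence is:
S_n = \{\, s \in S : \operatorname{Cr}(s) > 1/n \,\}, \qquad S = \bigcup_{n=1}^{\infty} S_n
% A countable union of finite sets is countable, so if S is uncountable,
% some S_n must be infinite. For that n:
\sum_{s \in S} \operatorname{Cr}(s) \;\ge\; \sum_{s \in S_n} \operatorname{Cr}(s) \;\ge\; |S_n| \cdot \frac{1}{n} \;=\; \infty \;>\; 1
```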
We also have reason to believe that a moral parliament approach is wrong, since it ignores the relative strengths of claims across different theories. As far as I can tell, there’s no good way to incorporate the relative strengths of claims between theories either, so there doesn’t seem to be any good way to deal with this problem. And again, there’s no convincing positive reason to choose any such approach at all, rather than reject them all.
Maybe you ought to assign them all positive credence (and push the problem up a level), but this says nothing about how much, or why I shouldn’t assign equal or greater credence to the “exact opposite” principles, e.g. the principle that if I have more credence in x > y than in y > x, then I should choose y over x.
Furthermore, Tarsney points out in section 2.2 that GDoT* undermines itself under at least one particular form of nihilism.
Thanks for sharing these points. For people interested in this topic, I’d also recommend Tarsney’s full thesis, or dipping into relevant chapters of it. (I only read about a quarter of it myself and am not an expert in the area, but it seemed quite interesting and like it was probably making quite substantive contributions.)
Also on infectiousness, I thought I’d note that MacAskill himself provides what he calls “a solution to the infectious incomparability problem”. He does this in his thesis, rather than in the paper he published a year earlier, which Lukas referenced in this post. (I can’t actually remember the details of this proposed solution, but it was at the end of Chapter 5, for anyone interested.)