I think there are legitimate possibilities for infinities and value lexicality, though (for me personally, extremely intense suffering seems like it could matter infinitely more), and MEC with intertheoretic comparisons would just mean infinity-chasing fanaticism.[1] It can become a race to the bottom toward ever less plausible views, because under a lexicographic order you can have infinities that lexically dominate other infinities. You’re stuck with at least one of the following:
infinity-chasing fanaticism (if you keep MEC with intertheoretic comparisons),
ruling out these views with certainty,
ruling out the intertheoretic comparisons,
not using MEC.
The full MEC argument, allowing lexicographic orders, in response to a view X on which humans matter infinitely more than nonhuman animals is not very intuitive. There are (at least) two possible groups of views to compare X to:
Y. Humans and nonhuman animals both matter only finitely.
Y’. Humans and nonhuman animals both matter infinitely, an infinite “amplification” of Y.
(Also Z. Humans matter finitely, and some nonhuman animals matter infinitely.)
When you take expected values/choiceworthiness over X, Y and Y’ (and Z), Y is effectively ignored: X and Y’ (and Z) end up deciding everything, and the interests of nonhuman animals wouldn’t be lexically dominated after all. But we can amplify X infinitely too, and then do the same to Y’, just shifting further along the lexicographic order to higher infinities, and we can keep shifting further and further. Then the actual reason nonhuman animals’ interests aren’t lexically dominated, if they aren’t, will be exotic, implausible views on which nonhuman animals matter infinitely, at some high infinity. Even if that’s the right answer, it doesn’t seem like the right way to get to it.
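To make the washing-out concrete, here is a toy sketch in Python (my own illustration, not anything from the moral uncertainty literature). It encodes each view’s valuation of an option as coefficients on “infinity levels” under a lexicographic order and computes expected choiceworthiness level by level, which is one simple, and debatable, way to extend MEC to lexical values; the credences and numbers are made up.

```python
# Toy model: a lexicographic "value" is a dict {infinity_level: coefficient},
# level 0 = finite value, level 1 = first infinity, level 2 = a higher one, etc.

def expected_choiceworthiness(credences, valuations, levels=3):
    """Probability-weighted sum of the coefficients at each infinity level,
    returned highest level first so tuple comparison is lexicographic."""
    ec = [0.0] * levels
    for view, p in credences.items():
        for level, value in valuations[view].items():
            ec[level] += p * value
    return tuple(reversed(ec))

# Made-up credences in the three views.
credences = {"X": 0.2, "Y": 0.6, "Y'": 0.2}

# Valuations of two options under each view (made-up numbers):
# X: humans at level 1, animals at level 0; Y: both at level 0; Y': both at level 1.
help_animals = {"X": {0: 1.0}, "Y": {0: 1.0}, "Y'": {1: 1.0}}
help_humans = {"X": {1: 1.0}, "Y": {0: 1.0}, "Y'": {1: 1.0}}

print(expected_choiceworthiness(credences, help_animals))  # (0.0, 0.2, 0.8)
print(expected_choiceworthiness(credences, help_humans))   # (0.0, 0.4, 0.6)
# The tuples differ at level 1, so the comparison is settled there, i.e. by X
# and Y' alone; Y's finite (level-0) values never get a chance to matter.
```

The level-by-level expectation is just one possible design choice for handling infinite values, but any version of MEC that lets infinite values dominate finite ones will exhibit the same washing-out of Y.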
If you don’t allow lexical amplifications, then you have to rule out one of Y or Y’. Or maybe you only allow certain lexical amplifications.
For another critique of MEC’s handling of infinities, see A dilemma for Maximize Expected Choiceworthiness (MEC), and the comments.
I think the intuition of neutrality is sometimes just called “the person-affecting restriction”, and any view satisfying it is a person-affecting view, but there are other person-affecting views (like asymmetric ones, wide ones). I consider it to be one among many person-affecting views.
Although you can also “amplify” any nonlexical view into a lexical one, essentially by multiplying everything by infinity, i.e. shifting everything up a position under a lexicographic order.
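In the toy encoding from the sketch above, this kind of amplification is just shifting every coefficient up a lexicographic level (again my own illustration, not a standard construction):

```python
def amplify(valuation, shift=1):
    """'Multiply by infinity' shift times: move every coefficient up
    that many infinity levels in the lexicographic representation."""
    return {level + shift: value for level, value in valuation.items()}

print(amplify({0: 3.0}))          # a purely finite view becomes lexical: {1: 3.0}
print(amplify({0: 1.0, 1: 2.0}))  # an already-lexical view is pushed higher: {1: 1.0, 2: 2.0}
```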
This is a good critique of MEC. Thanks for spelling it out, as I’ve never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as the thought experiment of a 51% chance of double utility versus 49% chance of zero utility, and Pascal’s mugging.
I could play the game with the “humans matter infinitely more than animals” person by saying “well, in my philosophical theory, humans matter the same as in yours, but animals occupy the same lexicographic position as humans”. Of course, they could then say, “no, my lexicographic position for humanity is one degree greater than yours”, and so on.
This reminds me of Gödel’s first incompleteness theorem, where you can’t just fix your axiomatization of mathematics by adding the Gödel statement to the list of axioms, because then a new Gödel statement pops into existence. Even if you add an axiom schema that includes all of those Gödel statements, a new kind of Gödel statement pops into existence. There’s no getting around the incompleteness result, because it comes from the power of the axiomatization, not from some weakness that can be patched. Similarly, MEC can be said to be a “powerful” system for reconciling moral uncertainty, because it can incorporate all moral views in some way, but that same power allows views to be constructed which “exploit” MEC in ways that other approaches to moral uncertainty aren’t (as) susceptible to.