This is a good critique of MEC (maximizing expected choiceworthiness). Thanks for spelling it out; I'd never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as Pascal's mugging, or the thought experiment of accepting a 51% chance of doubled utility against a 49% chance of zero utility.
I could play the same game with the "humans matter infinitely more than animals" person by replying, "Well, on my philosophical theory, humans matter just as much as on yours, but animals occupy the same lexicographic position as humans." Of course, they could then respond, "No, on my theory humanity's lexicographic position is one degree higher than on yours," and so on.
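To make the escalation concrete, here's a toy sketch (my own illustrative model, not anything from the moral-uncertainty literature) of lexicographic theories as tuples: an outcome's value is read most-significant-tier-first, so any gain at a higher tier outweighs every possible gain at lower tiers. Python happens to compare tuples lexicographically, which gives us the ordering for free. The tier assignments and welfare numbers are made up for illustration.

```python
def value(outcome, tiers):
    """Score an outcome under a theory assigning each group a tier.

    outcome: dict mapping group name -> welfare gained.
    tiers: dict mapping group name -> tier index (higher = lexically prior).
    Returns a tuple, most-significant tier first, so Python's built-in
    tuple comparison implements lexicographic dominance.
    """
    n = max(tiers.values()) + 1
    v = [0] * n
    for group, welfare in outcome.items():
        v[n - 1 - tiers[group]] += welfare
    return tuple(v)

outcome_a = {"humans": 1, "animals": 0}      # help one human
outcome_b = {"humans": 0, "animals": 1000}   # help a thousand animals

# "Humans matter infinitely more": humans one tier above animals.
speciesist = {"humans": 1, "animals": 0}
# My counter-theory: both groups share the top tier.
egalitarian = {"humans": 1, "animals": 1}
# Their rejoinder: humans bumped one degree higher still.
escalated = {"humans": 2, "animals": 0}

# Under the speciesist theory, one human beats any number of animals.
assert value(outcome_a, speciesist) > value(outcome_b, speciesist)
# Under my counter-theory, the animals' welfare counts at the same tier.
assert value(outcome_b, egalitarian) > value(outcome_a, egalitarian)
# After the rejoinder, the human wins again; nothing stops further rounds.
assert value(outcome_a, escalated) > value(outcome_b, escalated)
```

The point of the sketch is that each round of the game is just incrementing a tier index, so there's no principled stopping point, which is exactly the exploit.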
This reminds me of Gödel's first incompleteness theorem, where you can't just fix your axiomatization of mathematics by adding the Gödel sentence to the list of axioms, because the strengthened system has a new Gödel sentence of its own. Even if you add an axiom schema that sweeps in all of those Gödel sentences at once, yet another one pops into existence. There's no getting around the incompleteness result, because it comes from the power of the axiomatization (its ability to represent arithmetic), not from some weakness that can be patched. Similarly, MEC can be called a "powerful" system for reconciling moral uncertainty, because it can incorporate any moral view in some way, but that same power allows views to be crafted that "exploit" MEC in ways other methods of reconciliation aren't (as) susceptible to.