Your statements about PAV make sense. I typically think about PAV as you wrote:
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way
But there could be other conceptions. Somewhat tangentially, I'm deeply suspicious of views which don't allow comparison to other views, which I see as a handwave to avoid having to engage critically with alternative perspectives.
If I'm talking to a person who doesn't care about animals, and I try to persuade them using moral uncertainty, and they say "no, but one human is worth infinity animals, so I can just ignore whatever magnitude of animal suffering you throw at me", and they're unwilling to actually quantify their scales and critically discuss what could change their mind, that's evidence that they're engaging in motivated reasoning.
As a result, I hold very low credence in views which don't admit some approach to intertheoretic comparison. I haven't spent much time thinking about which approach to resolving moral uncertainty is best, but MEC has always seemed to me to be a clear default, just as maximizing EV is in everyday decision-making. As with maximizing EV, MEC can also be fairly accused of fanaticism, which is a legitimate concern.
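To make the analogy concrete, here's a minimal sketch of MEC as credence-weighted choiceworthiness, structurally identical to expected value. All credences and numbers are invented for illustration, and the shared scale is exactly the intertheoretic comparison being assumed:

```python
# Minimal MEC sketch: expected choiceworthiness is credence-weighted
# choiceworthiness, mirroring expected value. All numbers are invented.

credences = {"utilitarianism": 0.6, "animal_welfare_skeptic": 0.4}

# Choiceworthiness of each option under each theory, on a shared
# (intertheoretically comparable) scale -- the assumption doing the work.
choiceworthiness = {
    "fund_animal_charity": {"utilitarianism": 100, "animal_welfare_skeptic": 0},
    "fund_human_charity":  {"utilitarianism": 40,  "animal_welfare_skeptic": 50},
}

def expected_choiceworthiness(option):
    return sum(credences[t] * cw for t, cw in choiceworthiness[option].items())

for option in choiceworthiness:
    print(option, expected_choiceworthiness(option))
# fund_animal_charity: 0.6*100 + 0.4*0  = 60.0
# fund_human_charity:  0.6*40  + 0.4*50 = 44.0
```

And just as with EV, a theory given tiny credence but astronomical choiceworthiness would swamp this sum, which is the fanaticism worry.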
On neutrality, I've always considered the intuition of neutrality to be approximately lumpable with PAV, so please let me know if I'm just wrong there. From what I recall, Chapter 8 of What We Owe the Future argues strenuously against both the intuition of neutrality and PAV, and when I was reading it, I didn't detect much of a difference between MacAskill's treatment of the two.
I think there are legitimate possibilities for infinities and value lexicality, though (for me personally, extremely intense suffering seems like it could matter infinitely more), and MEC with intertheoretic comparisons would just mean infinity-chasing fanaticism.[1] It can be a race to the bottom toward less plausible views, because you can have infinities that lexically dominate other infinities, with a lexicographic order. You're stuck with at least one of the following:
infinity-chasing fanaticism (under MEC with intertheoretic comparisons),
ruling out these views with certainty,
ruling out the intertheoretic comparisons,
not using MEC.
The full MEC argument in response to a view X on which humans matter infinitely more than nonhuman animals, allowing lexicographic orders, is not very intuitive. There are (at least) two possible groups of views to compare X to:
Y. Humans and nonhuman animals both matter only finitely.
Y′. Humans and nonhuman animals both matter infinitely, an infinite "amplification" of Y.
(Also Z. Humans matter finitely, and some nonhuman animals matter infinitely.)
When you take expected values/choiceworthiness over X, Y and Y′ (and Z), you will get that Y is effectively ignored: X and Y′ (and Z) end up deciding everything, and the interests of nonhuman animals wouldn't be lexically dominated. We can amplify X infinitely, too, and then do the same to Y′, just shifting along the lexicographic order to higher infinities. And we can keep shifting lexicographically further and further. Then, the actual reason nonhuman animals' interests aren't lexically dominated, if they're not, will be because of exotic implausible views where nonhuman animals matter infinitely, to some high infinity. Even if it's the right answer, that doesn't seem like the right way to get to it.
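Here's a toy model of why Y drops out, assuming we represent lexical value as pairs (infinite tier, finite tier) compared lexicographically and take expectations componentwise. The credences and welfare numbers are invented:

```python
# Toy model of MEC over lexical views, with invented numbers.
# A value is a pair (infinite_tier, finite_tier); tuples compare
# lexicographically, so any nonzero infinite tier dominates.

credences = {"X": 0.25, "Y": 0.5, "Y_prime": 0.25}

def cw(theory, humans, animals):
    """Choiceworthiness of a welfare outcome under each view."""
    if theory == "X":        # humans matter infinitely more than animals
        return (humans, animals)
    if theory == "Y":        # both matter only finitely
        return (0, humans + animals)
    return (humans + animals, 0)  # Y_prime: both matter infinitely

def expected_cw(humans, animals):
    totals = [0.0, 0.0]
    for theory, p in credences.items():
        hi, lo = cw(theory, humans, animals)
        totals[0] += p * hi
        totals[1] += p * lo
    return tuple(totals)

help_humans  = expected_cw(humans=1, animals=0)    # -> (0.5, 0.5)
help_animals = expected_cw(humans=0, animals=100)  # -> (25.0, 75.0)
print(help_animals > help_humans)  # True: settled entirely on the infinite tier
# Y's 0.5 credence only ever touched the finite tier, so it was ignored;
# X and Y_prime decided everything, and animals were not lexically dominated.
```

Swapping in longer tuples reproduces the amplification move: prepending a higher tier to X, then to Y′, just replays the same dominance one level up.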
If you don't allow lexical amplifications, then you have to rule out one of Y or Y′. Or maybe you only allow certain lexical amplifications.
I think the intuition of neutrality is sometimes just called "the person-affecting restriction", and any view satisfying it is a person-affecting view, but there are other person-affecting views (like asymmetric ones, wide ones). I consider it to be one among many person-affecting views.
Although you can also "amplify" any nonlexical view into a lexical one, by basically multiplying everything by infinity, e.g. shifting everything under a lexicographic order.
This is a good critique of MEC. Thanks for spelling it out, as I've never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as the thought experiment of a 51% chance of double utility versus a 49% chance of zero utility, and Pascal's mugging.
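For concreteness, one common way to run that 51/49 reductio is to iterate the gamble: expected utility grows without bound while the probability of keeping any utility at all vanishes. A quick sketch:

```python
# The 51%-double / 49%-zero gamble, iterated n times.
# Each round multiplies expected utility by 0.51 * 2 = 1.02, yet the
# chance of retaining any utility at all is 0.51**n, which vanishes.
for n in (1, 10, 100):
    ev_multiplier = 1.02 ** n
    p_nonzero = 0.51 ** n
    print(f"n={n}: EV multiplier={ev_multiplier:.2f}, P(utility > 0)={p_nonzero:.2e}")
```

An EV maximizer accepts every round anyway, which is the reductio.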
I could play the game with the "humans matter infinitely more than animals" person by saying "well, in my philosophical theory, humans matter the same as in yours, but animals are at the same lexicographic position as humans". Of course, they could then say, "no, my lexicographic position for humanity is one degree greater than yours", and so on.
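A tuple picture of that escalation game, using the same lexicographic representation as above (positions are descending tiers; all placements are invented for illustration):

```python
# Tier-escalation game: values as tuples (highest tier first), compared
# lexicographically.
human_mine,  animal_mine  = (0, 0, 1), (0, 0, 1)  # my view: same tier
human_their, animal_their = (0, 1, 0), (0, 0, 1)  # their reply: humans one tier up
print(human_their > animal_their)  # True: humans now lexically dominate animals
# I can re-level by moving animals to (0, 1, 0); they escalate humans to
# (1, 0, 0); with unboundedly many tiers the game never terminates, and the
# order rewards whoever claims the higher tier last.
```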
This reminds me of Gödel's Incompleteness Theorem, where you can't just fix your axiomatization of mathematics by adding the Gödel statement to the list of axioms, because then a new Gödel statement pops into existence. Even if you include an axiom schema where all of the Gödel statements get added to the list of axioms, a new kind of Gödel statement pops into existence. There's no getting around the incompleteness result, because it comes from the power of the axiomatization of mathematics, not from some weakness which can be filled. Similarly, MEC can be said to be a "powerful" system for reconciling moral uncertainty, because it can incorporate all moral views in some way, but that also allows views to be constructed which "exploit" MEC in a way that other reconciliations aren't (as) susceptible to.
[1] For another critique of MEC's handling of infinities, see A dilemma for Maximize Expected Choiceworthiness (MEC), and the comments.