It’s less about valuing individual welfare at a greater rate under PAVs (although that could happen in principle) and more about grounding value in ways that don’t allow intertheoretic comparisons with total views at all, or just refusing to attempt such intertheoretic comparisons altogether, or refusing to apply MEC using them. It could be like trying to compare temperature and weight, which seems absurd because they measure very different things. Even if the targets are at least superficially similar, like welfare in both cases, the units could still be incompatible, with no justifiable common scale or conversion rate between them.
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way, and that would be a good candidate for a common scale with the total view, and so for intertheoretic comparisons under MEC in the obvious way: valuing an existing individual’s welfare identically across the views. However, it’s not clear that this is the only plausible or preferred way to ground person-affecting views.
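To make the “common scale” option concrete, here’s a minimal sketch in Python of MEC under that assumption, where the PAV reads welfare off the same scale as the total view but only counts existing people. Everything here (the credences, the welfare numbers, the two-theory setup) is a made-up illustration, not a model of any actual view or intervention.

```python
# Minimal sketch of MEC with a common, total view-compatible welfare scale.
# All numbers are illustrative assumptions, not claims about any real view
# or intervention.

# Each option's outcome: welfare changes to existing people, and to people
# whose very existence depends on the choice.
outcomes = {
    "fund_family_planning": {"existing": +10.0, "contingent": -50.0},
    "do_nothing":           {"existing":   0.0, "contingent":   0.0},
}

def total_view(outcome):
    # Total view: all welfare counts, existing or contingent.
    return outcome["existing"] + outcome["contingent"]

def person_affecting(outcome):
    # PAV grounded on the same scale, restricted to existing people.
    return outcome["existing"]

credences = {total_view: 0.5, person_affecting: 0.5}

def expected_choiceworthiness(outcome):
    # MEC: credence-weighted sum of choiceworthiness, which only makes
    # sense here because both theories use the same welfare units.
    return sum(p * theory(outcome) for theory, p in credences.items())

for name, outcome in outcomes.items():
    print(name, expected_choiceworthiness(outcome))
# fund_family_planning -15.0, do_nothing 0.0: with these made-up numbers,
# the total view's term swamps the PAV's, so MEC disfavors the intervention.
```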
Stepping back, your argument depends on high confidence in multiple controversial assumptions:
1. the use of MEC at all (possibly alongside other approaches), rather than only approaches to moral uncertainty that don’t involve MEC, like a moral parliament or a property rights approach, which tend to be more generally applicable (including to non-quantitative views), less fanatical, and, in my view, fairer;
2. the use of MEC with intertheoretic comparisons at all (possibly alongside other normalization approaches), rather than only normalization approaches for MEC without intertheoretic comparisons;
3. for almost every plausible grounding of a plausible PAV, the existence and use of a specific common scale for intertheoretic comparisons with some grounding of a total view (or similar) under MEC;
4. MEC with the intertheoretic comparisons from 3 generally disapproving of family planning.
Your second consideration makes sense, and might result in a modest dampening effect on the 99% number, if the increase in mothers’ standard of living due to FEM’s intervention is weighted heavily.
Ah, I meant to point this out because your quotes from MacAskill and Ord are critical of neutrality, and I don’t expect neutrality to be very representative of those holding person-affecting views or who would otherwise support family planning for person-affecting reasons. It could be a strawman.
Your statements about PAV make sense. I typically think about PAV as you wrote:
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way
But there could be other conceptions. Somewhat tangentially, I’m deeply suspicious of views which don’t allow comparison to other views, which I see as a handwave to avoid having to engage critically with alternative perspectives.
If I’m talking to a person who doesn’t care about animals, and I try to persuade them using moral uncertainty, and they say “no, but one human is worth infinity animals, so I can just ignore whatever magnitude of animal suffering you throw at me”, and they’re unwilling to actually quantify their scales and critically discuss what could change their mind, that’s evidence that they’re engaging in motivated reasoning.
As a result, I hold very low credence in views which don’t admit some approach to intertheoretic comparison. I haven’t spent much time thinking about which approach to resolving moral uncertainty is best, but MEC has always seemed to me to be a clear default, just as maximizing EV is in everyday decision-making. And like maximizing EV, MEC can fairly be accused of fanaticism, which is a legitimate concern.
On neutrality, I’ve always considered the intuition of neutrality to be approximately lumpable with PAV, so please let me know if I’m just wrong there. From what I recall, Chapter 8 of What We Owe the Future argues strenuously against both the intuition of neutrality and PAV, and when I was reading it, I didn’t detect much of a difference between MacAskill’s treatment of the two.
I think there are legitimate possibilities for infinities and value lexicality, though (for me personally, extremely intense suffering seems like it could matter infinitely more), and MEC with intertheoretic comparisons would just mean infinity-chasing fanaticism.[1] It can become a race to the bottom toward less plausible views, because you can have infinities that lexically dominate other infinities under a lexicographic order. You’re stuck with at least one of the following:
infinity-chasing fanaticism (with MEC with intertheoretic comparisons),
ruling out these views with certainty,
ruling out the intertheoretic comparisons,
not using MEC.
The full MEC argument in response to a view X on which humans matter infinitely more than nonhuman animals, allowing lexicographic orders, is not very intuitive. There are (at least) two possible groups of views to compare X to:
Y. Humans and nonhuman animals both matter only finitely.
Y’. Humans and nonhuman animals both matter infinitely, an infinite “amplification” of Y.
(Also Z. Humans matter finitely, and some nonhuman animals matter infinitely.)
When you take expected values/choiceworthiness over X, Y, and Y’ (and Z), Y is effectively ignored: X and Y’ (and Z) end up deciding everything, and the interests of nonhuman animals wouldn’t be lexically dominated. We can amplify X infinitely, too, and then do the same to Y’, just shifting along the lexicographic order to higher infinities, and we can keep shifting lexicographically further and further. Then the actual reason nonhuman animals’ interests aren’t lexically dominated, if they’re not, will be exotic, implausible views on which nonhuman animals matter infinitely, to some high infinity. Even if that’s the right answer, it doesn’t seem like the right way to get to it.
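To see concretely why Y drops out, here’s a toy sketch (continuing in Python, with entirely made-up credences and magnitudes) of one natural way to formalize this: values as (infinite tier, finite tier) pairs compared lexicographically.

```python
# Toy sketch of lexicographic MEC. A value is a pair (infinite_tier,
# finite_tier); Python compares tuples lexicographically, so any nonzero
# difference in the first entry swamps the second. All numbers are
# made-up assumptions for illustration.

# Suppose an option gives animals +100 at a finite cost of -1 to humans.
views = {
    "X":  (-1.0,  100.0),  # humans matter infinitely more: human cost is tier 1
    "Y":  ( 0.0,   99.0),  # both matter finitely: -1 + 100 lands in tier 0
    "Y'": (99.0,    0.0),  # infinite amplification of Y: everything in tier 1
}
credences = {"X": 0.4, "Y": 0.5, "Y'": 0.1}

# MEC: credence-weighted sum, tier by tier.
ec = tuple(sum(credences[v] * views[v][i] for v in views) for i in range(2))
print(ec)  # (9.5, 89.5)

# The verdict is set entirely in the infinite tier (9.5 > 0), where only X
# and Y' contribute; Y's 99.0 sits in the finite tier and is ignored except
# as a tie-break. The finite view does no work, exactly as described above.
```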
If you don’t allow lexical amplifications, then you have to rule out one of Y or Y’. Or maybe you only allow certain lexical amplifications.

[1] For another critique of MEC’s handling of infinities, see A dilemma for Maximize Expected Choiceworthiness (MEC), and the comments.
I think the intuition of neutrality is sometimes just called “the person-affecting restriction”, and any view satisfying it is a person-affecting view, but there are other person-affecting views (like asymmetric ones, wide ones). I consider it to be one among many person-affecting views.
Although you can also “amplify” any nonlexical view into a lexical one, by basically multiplying everything by infinity, e.g. shifting everything under a lexicographic order.
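In the toy representation used in the sketch above, that kind of amplification is just a shift along the tuple; again, purely illustrative:

```python
# Continuing the toy representation: "amplifying" a view shifts all of its
# value up one tier, while unamplified views get embedded at the bottom.

def embed(v):
    # Same view, expressed in a 3-tier world: (hi, lo) -> (0, hi, lo).
    return (0.0,) + v

def amplify(v):
    # Shift every tier up one level: (hi, lo) -> (hi, lo, 0).
    return v + (0.0,)

print(embed((0.0, 99.0)))    # Y unchanged:  (0.0, 0.0, 99.0)
print(amplify((0.0, 99.0)))  # Y amplified:  (0.0, 99.0, 0.0)
# Lexicographically, (0.0, 99.0, 0.0) > (0.0, 0.0, 99.0): the amplified
# view now dominates the original, and the race can repeat indefinitely.
```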
This is a good critique of MEC. Thanks for spelling it out, as I’ve never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as the thought experiment of a 51% chance of double utility versus 49% chance of zero utility, and Pascal’s mugging.
I could play the game with the “humans matter infinitely more than animals” person by saying “well, in my philosophical theory, humans matter the same as in yours, but animals are on the same lexicographic position as humans”. Of course, they could then say, “no, my lexicographic position of humanity is one degree greater than yours”, and so on.
This reminds me of Gödel’s Incompleteness Theorem, where you can’t just fix your axiomatization of mathematics by adding the Gödel statement to the list of axioms, because then a new Gödel statement pops into existence. Even if you include an axiom schema where all of the Gödel statements get added to the list of axioms, a new kind of Gödel statement pops into existence. There’s no getting around the incompleteness result, because the incompleteness result comes from the power of the axiomatization of mathematics, not from some weakness which can be filled. Similarly, MEC can be said to be a “powerful” system for reconciling moral uncertainty, because it can incorporate all moral views in some way, but that also allows views to be created which “exploit” MEC in a way that other reconciliations aren’t (as) susceptible to.