The sign of the effect of FEM seems to depend crucially on a very high credence in the person-affecting view, under which the interests of future people are not considered.
In Kano, Anambra, and Ondo, FEM prevents one maternal death by preventing 281, 268, and 249 unintended pregnancies respectively. Even if only ~40% of these unintended pregnancies would have counterfactually been carried to term (due to abortion, replacement, and other factors), that still means preventing one maternal death prevents the creation of ~100 human beings. In other words, FEM's intervention prevents ~100x as much human life experience as it creates by averting a maternal death. If one desires to maximize expected choice-worthiness under moral uncertainty, assuming the value of human experience is independent of the person-affecting view, one must be ~99% confident that the person-affecting view is true for FEM to be net positive.
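The ~99% figure follows from a simple expected-choiceworthiness breakeven. A minimal sketch (my own formalization of the arithmetic above; it assumes, per the independence assumption, that each prevented life under the total view carries the same welfare value as the saved mother's life):

```python
def expected_choiceworthiness(p_pav, lives_prevented=100):
    """Expected choiceworthiness of averting one maternal death.

    With credence p_pav in the person-affecting view, only the saved
    mother counts (+1). With credence 1 - p_pav in a total view, the
    ~100 prevented lives also count, for a net value of 1 - 100 = -99.
    """
    return p_pav * 1 + (1 - p_pav) * (1 - lives_prevented)

def breakeven_credence(lives_prevented=100):
    """Credence in the person-affecting view at which the intervention
    is exactly neutral: solve p + (1 - p) * (1 - k) = 0 for p."""
    return (lives_prevented - 1) / lives_prevented

print(breakeven_credence())             # 0.99
print(expected_choiceworthiness(0.95))  # about -4.0: net harmful under MEC
```

So with 100 lives prevented per death averted, any credence below 99% in the person-affecting view makes the intervention net negative under this model.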
However, many EAs, especially longtermists, argue that the person-affecting view is unlikely to be true. For example, Will MacAskill spends most of Chapter 8 of What We Owe The Future arguing that "all proposed defences of the intuition of neutrality [i.e. person-affecting view] suffer from devastating objections". Toby Ord writes in The Precipice (p. 263) that "Any plausible account of population ethics will involve…making sacrifices on behalf of merely possible people."
If there's a significant probability that the person-affecting view may be false, then FEM's effect could in reality be up to 100x as negative as its effect on mothers is positive.
Even if one rejects the person-affecting view but supports FEM for its (definitely positive) effects on farmed animals, one should then be sure not to support lifesaving charities like AMF, which have the opposite effect on farmed animals. One should also regard FEM saving mothers' lives as an unfortunate side effect of FEM's intervention, because saving mothers' lives is bad for the farmed animals the mothers eat.
Also, more of the farmed animals helped by reducing the human population don't exist yet and will be created in the future. So it's curious that one would account for the interests of farmed animals that don't exist yet, but ignore the interests of human beings that don't exist yet. (To be fair, there are views like the procreation asymmetry which could justify this.)
On the whole, whether or not there's a significant probability that the person-affecting view may be false seems to be a crucial consideration for the sign of the effect of family planning charities such as FEM and MHI. I'd be interested in how Rethink Priorities would approach incorporating moral uncertainty regarding the person-affecting view into its report on FEM.
Edit: Added "assuming the value of human experience is independent of the person-affecting view" for precision, as MichaelStJules pointed out.
I am just coming from a What We Owe the Future reading group; thanks for reminding me of the gap between my moral intuitions and total utilitarianism!
One reason why I am not convinced by your argument is that I am not sure that the additional lives lived due to the unintended pregnancies are globally net-positive:
On the one hand, it does seem quite likely that their lives will be subjectively worth living (the majority of people agree with this statement, and it does not seem to me that these lives would be too different) and that they would have net-positive relationships in the future.
On the other hand, given a level of human technology, there is some finite number of people on earth which is optimal from a total-utility standpoint. And given the current state of biodiversity loss, soil erosion, and global warming, it does not seem obvious that humanity is below that number[1]
Third, given that these are unintended pregnancies, it does seem likely that there are resource limitations which would lead to hardships if a person is born. We would need to know a lot about the life situation and social support structures of the potential parents to estimate how significant this effect is, but it could easily be non-trivial.
Edited to add and remove: the number of 100 pregnancies averted does not correspond to 100 fewer children being born in the end. A significant part of the pregnancies would only be shifted in time. I would be surprised if the true number were larger than 10, and I expect it to be lower. My reasoning is that access to contraception will hardly reduce the total number of children each set of parents is going to have by 100x. If that number started at 10 children and were reduced to a single child, the reduction would correspond to 10 fewer births per death averted. And stated like this, even the number 10 seems quite high. (Sorry, there were a few confusions in this argument.)
This being said, the main reason why I am emotionally unconvinced by the argument you give is probably that I am on some level unable to contemplate "failing to have children" as something that is morally bad.
My intuitions have somewhat caught up with the arguments that giving happy lives the opportunity to exist is a great thing, but they do not agree to the sign-flipped case for now. Probably, a part of this is that I do not trust myself (or others) to actually reason clearly on this topic, and this just feels like "do not go there" emotionally.
It also does not seem obvious that we are above that number, especially when trying to include topics like wild animal suffering. At least I feel confident that human population isn't off from the optimum by a huge factor.
Thanks for your comment, Ariel. We haven't attempted to assess the value of different population ethics views, or how those would affect the (cost-)effectiveness of FEM's work. We believe that is a highly complex topic that would take more time than the short period we had to conduct this research. Work on this would benefit from the Worldview Investigations Team at Rethink Priorities, which could explore family planning topics in the future. I'm sorry we neglected to add that to the editorial note and disclaimer. I will edit it to reflect this.
FWIW, basically the same argument would also undermine almost all global health work and other neartermist work. Why work on saving hundreds or thousands or even millions of lives when you can reduce the probability of extinction and marginally increase the probability of 10^50 (or whatever) happy conscious beings coming into existence?
The difference is mostly a matter of degree: in extinction prevention compared to family planning, we have much smaller probabilities for the high-payoff possibility (preventing extinction + the total view being true) and much larger payoffs conditional on that possibility.
I don't think it makes sense to single out family planning in particular with this kind of argument.
I think there's a big difference between strong longtermism (the argument you state) and my comment's argument that FEM's intervention is net negative.
My comment argues that while FEM's intentions are well-meaning, their intervention may be net negative because it prevents people from experiencing lives they would have been glad to have lived. For my comment's argument to be plausible, all one needs to believe is that the loves and friendships future people may have are a positive good. Yes, my comment appeals to longtermism's endorsement of this view, but its claims and requirements are far more modest than those of strong longtermism.
There is no double standard or singling out here. I think global health work is good, and support funding for it on the margin. I believe the same about animal welfare, and about longtermism. Yes, some interventions are more cost-effective than others, and I think broadly similar arguments (e.g. even if you think animals don't matter, a small chance that they do matter should be enough to prioritize animal welfare over global health due to animal welfare's scale and neglectedness) do indeed go through.
If you provided me another example of a neartermist intervention which prevents people from experiencing lives they would have been glad to have lived, I would make the same argument against it as in my earlier comment. It could be family planning, or it could be something else (e.g. advocacy of a one-child policy, perhaps for environmentalist purposes).
I'm also quite sympathetic to the pure philosophical case for strong longtermism, though I have some caveats in practice. So yes, I don't think your statement of strong longtermism is unreasonable.
tl;dr: I think your strong argument based on MEC depends on pretty controversial assumptions, and your more modest argument doesn't imply we shouldn't support family planning at all in our portfolio, all things considered.
Your original argument does depend on MEC and (roughly) risk-neutral EV maximization for the total view, or else high credence (>50%?) in moral views according to which it's good to make more happy people. You were multiplying the number of lives prevented by the credence in totalism. The standard argument for strong longtermism does essentially the same.
Similar arguments don't go through on most other popular approaches to moral uncertainty without high credence in the good of making more happy people. I'd guess most EA interventions are net negative according to some view that isn't totally implausible, and this isn't enough to stop us from pursuing them. Some approaches to moral uncertainty, like the property rights approach, could support family planning even if it's (very) net negative according to a credal supermajority of views. (Maybe that counts against the approach, though!)
Your more modest (non-MEC) argument about recognizing goods (love, friendship) impersonally, on the other hand, would not be very persuasive to most people who endorse person-affecting views. It might be the most basic standard objection to them, and pretty close to just the direct assertion that PAVs are false. It also wouldn't even follow that we shouldn't support family planning, if we consider moral uncertainty and assign some credence to person-affecting views. That would depend on the specifics.
If the difference between family planning work and the next best opportunities is small enough for welfare maximizers with person-affecting views, those without person-affecting views can pay those with PAVs the difference to forgo family planning work. Or maybe those with PAVs avoid family planning work to cooperate with other EAs, but I don't think other EAs are very against family planning, and cooperation with non-EAs might actually support family planning instead (perhaps depending on the kind, maybe less so abortion specifically).
I agree that we can distinguish between net negative (compared to doing nothing or some other default) and net positive but worse than something else, but the result is the same under consequentialism and MEC (if the argument succeeds): you shouldn't support either family planning or neartermist work generally, because there are better options, namely doing nothing (for family planning) or extinction risk reduction (for both). Again, under other approaches to moral uncertainty, it isn't obvious that family planning shouldn't be supported at all.
(This is ignoring some movement building effects of neartermist work and probably some other indirect effects. Under strong longtermism, maybe GiveWell did a lot of good by funneling people towards AI safety, building general expertise or buying reputation.)
I think the conclusion should instead be that we should take the impact of neartermist interventions on the experiences of future beings very seriously.
It's not necessary to endorse total utilitarianism or strong longtermism for my comment's argument to go through. If you see the loves and friendships future people may have as a positive good, even if those people may not exist yet, and even if you don't weigh them as highly as those of people living in the present, then I think you should carefully consider what my comment has to say.
When people feel like they have to choose between a cherished belief and a philosophical argument, their instinct is often to keep the cherished belief and dismiss the philosophical argument. It's entirely understandable that people do that! It takes strength to listen to one's beliefs being questioned, and it takes courage to really deeply probe at whether or not one's cherished belief is actually true. However:
What is true is already so.
Owning up to it doesnât make it worse.
Not being open about it doesnât make it go away.
And because itâs true, it is what is there to be interacted with.
Anything untrue isnât there to be lived.
People can stand what is true,
for they are already enduring it.
Eugene T. Gendlin, Focusing (Bantam Books, 1982).[1]
(Edited slightly for accuracy/precision and grammar.)
MEC=maximizing expected choiceworthiness, and PAV=person-affecting view.
~99% doesn't follow from MEC alone. You need MEC plus specific intertheoretic comparisons where individual welfare under PAVs has similar or lower absolute moral value to individual welfare under the total view (or similar enough views). And you need to ground such intertheoretic comparisons. There may be a case for it, but some versions of PAVs will probably ground value quite differently and in basically incompatible ways from any total view, so these comparisons wouldn't be justified between those versions and the total view. You'd have to use another approach to moral uncertainty (possibly along with MEC plus intertheoretic comparisons in more limited cases), and the other approaches wouldn't generally be nearly as sensitive to the foregone welfare of the children not born.
I'd also guess wide person-affecting views (prioritize quality and longevity, not population size) and asymmetric person-affecting views are more popular than person-affecting neutrality, and would probably be endorsed under further reflection by most people initially attracted to neutrality. These views still endorse making people happy over making happy people.
That being said, once you include the effects on a group of farmed animals, you should also probably include the effects on all (including wild) animals with similar or greater average moral weight and probability of moral patienthood, at least if you care about outcomes somewhat regardless of active contribution, e.g. if you're a utilitarian of any kind. This complicates things further.
Thanks for these caveats! I largely agree, but they seem to only have a modest impact on the 99% claim.
Regarding intertheoretic comparison, my prior is that a person-affecting view (PAV) should have little to no effect on one's valuation of welfare. I don't really see why PAV vs non-PAV would radically disagree on how important it is to help others. In this case, the disagreement would indeed have to be radical: even if, for some reason, PAV caused someone to 10x their valuation of welfare, they'd still have to be 90% certain PAV was true for FEM to be positive.
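The 90% figure here is the same breakeven arithmetic as the original ~99% claim, with the PAV's welfare valuation scaled up. A quick sketch (my own formalization; the 10x multiplier is the hypothetical from this paragraph, and ~100 prevented lives per death averted is taken from the original comment):

```python
def breakeven_credence(pav_multiplier=1.0, lives_prevented=100):
    """Credence in the PAV needed for the intervention to break even,
    if the PAV values welfare pav_multiplier times as much as the
    total view does. Solves p * m + (1 - p) * (1 - k) = 0 for p."""
    k = lives_prevented
    return (k - 1) / (k - 1 + pav_multiplier)

print(round(breakeven_credence(1), 3))   # 0.99  (the original ~99% case)
print(round(breakeven_credence(10), 3))  # 0.908 (still ~90% even at 10x)
```

So even a 10x disagreement about the value of welfare only moves the required credence from ~99% down to ~90%.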
For PAVs where value is grounded quite differently, I don't have an informed prior on just how different the PAV's grounding of value may be. If there are well-supported PAVs where welfare is clearly valued far more than under non-PAVs, then that would update the 99% claim. However, I don't know of any such PAV, nor of any non-PAV where welfare is valued far more than under PAVs (which would have the opposite effect).
Your second consideration makes sense, and might result in a modest dampening effect on the 99% number, if the increase in mothers' standard of living due to FEM's intervention is weighted heavily.
Couldn't agree more on the farmed and wild animal effects :) I won't pretend to have any degree of certainty about how it all shakes out.
It's less about valuing individual welfare at a greater rate under PAVs (although that could happen in principle) and more about grounding value in ways that don't allow intertheoretic comparisons at all with total views, or just refusing to attempt such intertheoretic comparisons altogether, or refusing to apply MEC using them. It could be like trying to compare temperature and weight, which seems absurd because they measure very different things. Even if the targets are at least superficially similar, like welfare in both cases, the units could still be incompatible, with no justifiable common scale or conversion rate between them.
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way, and that would be a good candidate for a common scale with the total view, and so for intertheoretic comparisons under MEC in the obvious way: valuing an existing individual's welfare identically across the views. However, it's not clear that this is the only plausible or preferred way to ground person-affecting views.
Stepping back, your argument depends on high confidence in multiple controversial assumptions:
1. the use of MEC at all (possibly alongside other approaches), rather than approaches to moral uncertainty not involving MEC, like a moral parliament or a property rights approach, which tend to be more generally applicable (including to non-quantitative views), less fanatical, and, in my view, more fair,
2. the use of MEC with intertheoretic comparisons at all (possibly alongside other normalization approaches), rather than other normalization approaches for MEC without intertheoretic comparisons,
3. for almost every plausible grounding of a plausible PAV, the existence and use of a specific common scale for intertheoretic comparisons with some grounding of a total view (or similar) under MEC,
4. MEC with the given intertheoretic comparisons from 3 generally disapproving of family planning.
Ah, I meant to point this out because your quotes from MacAskill and Ord are critical of neutrality, and I don't expect neutrality to be very representative of those holding person-affecting views or who would otherwise support family planning for person-affecting reasons. It could be a strawman.
Your statements about PAV make sense. I typically think about PAV as you wrote:
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way
But there could be other conceptions. Somewhat tangentially, I'm deeply suspicious of views which don't allow comparison to other views, which I see as a handwave to avoid having to engage critically with alternative perspectives.
If I'm talking to a person who doesn't care about animals, and I try to persuade them using moral uncertainty, and they say "no, but one human is worth infinity animals, so I can just ignore whatever magnitude of animal suffering you throw at me", and they're unwilling to actually quantify their scales and critically discuss what could change their mind, that's evidence that they're engaging in motivated reasoning.
As a result, I hold very low credence in views which don't admit some approach to intertheoretic comparison. I haven't spent much time thinking about which approach to resolving moral uncertainty is best, but MEC has always seemed to me to be a clear default, as with maximizing EV in everyday decision-making. As with maximizing EV, MEC can also fairly be accused of fanaticism, which is a legitimate concern.
On neutrality, I've always considered the intuition of neutrality to be approximately lumpable with PAV, so please let me know if I'm just wrong there. From what I recall, Chapter 8 of What We Owe the Future argues strenuously against both the intuition of neutrality and PAV, and when I was reading it, I didn't detect much of a difference between MacAskill's treatment of the two.
I think there are legitimate possibilities for infinities and value lexicality, though (for me personally, extremely intense suffering seems like it could matter infinitely more), and MEC with intertheoretic comparisons would just mean infinity-chasing fanaticism.[1] It can be a race to the bottom to less plausible views, because you can have infinities that lexically dominate other infinities, with a lexicographic order. You're stuck with at least one of the following:
infinity-chasing fanaticism (with MEC with intertheoretic comparisons),
ruling out these views with certainty,
ruling out the intertheoretic comparisons,
not using MEC.
The full MEC argument in response to a view X on which humans matter infinitely more than nonhuman animals, allowing lexicographic orders, is not very intuitive. There are (at least) two possible groups of views to compare X to:
Y. Humans and nonhuman animals both matter only finitely.
Yâ. Humans and nonhuman animals both matter infinitely, an infinite âamplificationâ of Y.
(Also Z. Humans matter finitely, and some nonhuman animals matter infinitely.)
When you take expected values/choiceworthiness over X, Y and Y′ (and Z), you will get that Y is effectively ignored, and you end up with X and Y′ (and Z) deciding everything, and the interests of nonhuman animals wouldn't be lexically dominated. We can amplify X infinitely, too, and then do the same to Y′, just shifting along the lexicographic order to higher infinities. And we can keep shifting lexicographically further and further. Then, the actual reason nonhuman animals' interests aren't lexically dominated, if they're not, will be because of exotic implausible views where nonhuman animals matter infinitely, to some high infinity. Even if it's the right answer, that doesn't seem like the right way to get to it.
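The step where Y gets effectively ignored can be seen in a toy model (entirely my own construction, with illustrative numbers): represent value as a pair (infinite-tier part, finite part) compared lexicographically, and let MEC take credence-weighted sums componentwise.

```python
# Toy lexicographic MEC. Value = (omega_part, finite_part): any nonzero
# omega_part dominates any finite_part. Action under evaluation: divert
# resources from 1 human to help 100 animals (numbers are illustrative).
#
#   X :  humans infinite, animals finite -> (-1, 100)
#   Y :  both finite                     -> (0, 99)
#   Y': both infinite                    -> (99, 0)

def mec(views):
    """Credence-weighted componentwise sum over (value, credence) pairs."""
    omega = sum(p * v[0] for v, p in views)
    finite = sum(p * v[1] for v, p in views)
    return (omega, finite)

def favored(value):
    """True if the action is net positive under the lexicographic order."""
    return value > (0, 0)  # Python tuple comparison is lexicographic

views = [((-1, 100), 0.40), ((0, 99), 0.55), ((99, 0), 0.05)]
print(mec(views))  # omega part: -0.40 + 0.05 * 99 = 4.55 > 0, so favored

# Y's contribution only affects the finite part, which is lexically
# dominated: replacing Y by a strongly negative finite view cannot flip
# the verdict while the omega part stays nonzero.
views_flipped_y = [((-1, 100), 0.40), ((0, -10**6), 0.55), ((99, 0), 0.05)]
print(favored(mec(views)) == favored(mec(views_flipped_y)))  # True
```

Even with only 5% credence in Y′, the verdict on the animals is settled entirely by the infinite-tier views X and Y′; the 55%-credence finite view Y never matters.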
If you don't allow lexical amplifications, then you have to rule out one of Y or Y′. Or maybe you only allow certain lexical amplifications.
I think the intuition of neutrality is sometimes just called "the person-affecting restriction", and any view satisfying it is a person-affecting view, but there are other person-affecting views (like asymmetric ones, wide ones). I consider it to be one among many person-affecting views.
Although you can also "amplify" any nonlexical view into a lexical one, by basically multiplying everything by infinity, e.g. shifting everything under a lexicographic order.
This is a good critique of MEC. Thanks for spelling it out, as I've never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as the thought experiment of a 51% chance of double utility versus a 49% chance of zero utility, and Pascal's mugging.
I could play the game with the "humans matter infinitely more than animals" person by saying "well, in my philosophical theory, humans matter the same as in yours, but animals are at the same lexicographic position as humans". Of course, they could then say, "no, my lexicographic position for humanity is one degree greater than yours", and so on.
This reminds me of Gödelâs Incompleteness Theorem, where you canât just fix your axiomatization of mathematics by adding the Gödel statement to the list of axioms, because then a new Gödel statement pops into existence. Even if you include an axiom schema where all of the Gödel statements get added to the list of axioms, a new kind of Gödel statement pops into existence. Thereâs no getting around the incompleteness result, because the incompleteness result comes from the power of the axiomatization of mathematics, not from some weakness which can be filled. Similarly, MEC can be said to be a âpowerfulâ system for reconciling moral uncertainty, because it can incorporate all moral views in some way, but that also allows views to be created which âexploitâ MEC in a way that other reconciliations arenât (as) susceptible to.
The sign of the effect of FEM seems to rely crucially on a very high credence in the person-affecting view, where the interests of future people are not considered.
In Kano, Anambra, and Ondo, FEM prevents one maternal death by preventing 281, 268, and 249 unintended pregnancies respectively. Even if only ~40% of these unintended pregnancies would have counterfactually been carried to term (due to abortion, replacement, and other factors), that still means preventing one maternal death prevents the creation of ~100 human beings. In other words, FEMâs intervention prevents ~100x as much human life experience as it creates by averting a maternal death. If one desires to maximize expected choice-worthiness under moral uncertainty, assuming the value of human experience is independent of the person-affecting view, one must be ~99% confident that the person-affecting view is true for FEM to be net positive.
However, many EAs, especially longtermists, argue that the person-affecting view is unlikely to be true. For example, Will MacAskill spends most of Chapter 8 of What We Owe The Future arguing that âall proposed defences of the intuition of neutrality [i.e. person-affecting view] suffer from devastating objectionsâ. Toby Ord writes in The Precipice p. 263 that âAny plausible account of population ethics will involveâŠmaking sacrifices on behalf of merely possible people.â
If thereâs a significant probability that the person-affecting view may be false, then FEMâs effect could in reality be up to 100x as negative as its effect on mothers is positive.
Even if one rejects the person-affecting view, but supports FEM for its (definitely positive) effects on farmed animals, they should then be sure to not support lifesaving charities like AMF, which have the opposite effect on farmed animals. They should also find FEM saving mothersâ lives to be an unfortunate side-effect of FEMâs intervention, because saving mothersâ lives is bad for the farmed animals the mothers eat.
Also, more of the farmed animals helped by reducing the human population donât exist yet and will be created in the future. So itâs curious that one would account for the interests of farmed animals that donât exist yet, but ignore the interests of human beings that donât exist yet. (To be fair, there are views like the procreation asymmetry which could justify this.)
On the whole, whether or not thereâs a significant probability that the person-affecting view may be false seems to be a crucial consideration for the sign of the effect of family planning charities such as FEM and MHI. Iâd be interested in how Rethink Priorities would approach incorporating moral uncertainty regarding the person-affecting view into its report on FEM.
Edit: Added âassuming the value of human experience is independent of the person-affecting viewâ for precision, as MichaelStJules pointed out.
I am just coming from a What We Owe the Future reading groupâthanks for reminding me of the gap between my moral untuitions and total utilitarianism!
One reason why I am not convinded by your argument is that I am not sure that the additional lifes lived due to the unintended pregnancies are globally net-positive:
on the one hand, it does seem quite likely that their lives will be subjectively worth living (the majority of people agrees with this statement and it does not seem to me that these lives would be too different) and that they would have net-positive relationships in the future.
but on the other hand, given a level of human technology, there is some finite number of people on earth which is optimal form a total utility standpoint. And given the current state of biodiversity loss, soil erosion and global warming, it does not seem obvious that humanity is below that number[1]
as a third part, given that these are unintended pregnancies, it does seem likely that there are resource limitations which would lead to hardships if a person is born. We would need to know a lot about the life situation and social support structures of the potential parents if we wanted to estimate how significant this effect is, but it could easily be non-trivial.
edited to add and remove:
the number of 100 pregnancies averted does not correspond to 100 fewer children being born in the end. A significant part of the pregnancies would only be shifted in time. I would be surprised if the true number is larger than 10 and expect it to be lower than this. My reasoning here is that the total number of children each set of parents is going to have will hardly be reduced by 100x from access to contraception. If this number started at 10 children and is reduced to a single child, we have a reduction that corresponds to 10 fewer births per death averted. And stated like this, even the number 10 seems quite high(sorry, there were a few confusions in this argument)This being said, the main reason why I am emotionally unconvinced by the argument you give is probably that I am on some level unable to contemplate âfailing to have childrenâ as something that is morally bad. My intuitions have somewhat cought up with the arguments that giving happy lives the opportunity to exist is a great thing, but they do not agree to the sign-flipped case for now. Probably, a part of this is that I do not trust myself (or others) to actually reason clearly on this topic and this just feels like âdo not go thereâ emotionally.
It also does not seem obvious that we are above that number. Especially when trying to include topics like wild animal suffering. At least I feel confident that human population isnât off from the optimum by a huge factor.
Thanks for your comment Ariel. We havenât attempted to assess the value of different population ethics views, or how those would affect the (cost-)effectiveness of FEMâs work. We believe that that is a highly complex topic that would take more time than the short period we had to conduct this research. Work on this would benefit from the Worldview Investigations Team at Rethink Priorities, which could explore family planning topics in the future. Iâm sorry we neglected to add that to the editorial note and disclaimer. I will edit it to reflect this.
FWIW, basically the same argument would also undermine almost all global health work and other neartermist work. Why work on saving hundreds or thousands or even millions of lives when you can reduce the probability of extinction and marginally increase the probability of 10^50 (or whatever) happy conscious beings coming into existence?
The difference is mostly a matter of degree: in extinction prevention compared to family planning, we have much smaller probabilities for the high payoff possibility (of preventing extinction+total view) and much larger payoffs conditional on the payoff.
I donât think it makes sense to single out family planning in particular with this kind of argument.
I think thereâs a big difference between strong longtermism (the argument you state) and my commentâs argument that FEMâs intervention is net negative.
My comment argues that while FEMâs intentions are well-meaning, their intervention may be net negative because it prevents people from experiencing lives they would have been glad to have lived. For my commentâs argument to be plausible, all one needs to believe is that the loves and friendships future people may have is a positive good. Yes, my comment appeals to longtermismâs endorsement of this view, but its claims and requirements are far more modest than those of strong longtermism.
There is no double standard or singling out here. I think global health work is good, and support funding for it on the margin. I believe the same about animal welfare, and about longtermism. Yes, some interventions are more cost-effective than others, and I think broadly similar arguments (e.g. even if you think animals donât matter, a small chance that they do matter should be enough to prioritize animal welfare over global health due to animal welfareâs scale and neglectedness) do indeed go through.
If you provided me another example of a neartermist intervention which prevents people from experiencing lives they would have been glad to have lived, I would make the same argument against it as in my earlier comment. It could be family planning, or it could be something else (e.g. advocacy of a one-child policy, perhaps for environmentalist purposes).
I'm also quite sympathetic to the pure philosophical case for strong longtermism, though I have some caveats in practice. So yes, I don't think your statement of strong longtermism is unreasonable.
tl;dr: I think your strong argument based on MEC depends on pretty controversial assumptions, and your more modest argument doesn't imply we shouldn't support family planning at all in our portfolio, all-things-considered.
Your original argument does depend on MEC and (roughly) risk-neutral EV maximization for the total view, or else high credence (>50%?) in moral views according to which it's good to make more happy people. You were multiplying the number of lives prevented by the credence in totalism. The standard argument for strong longtermism does essentially the same.
Similar arguments don't go through on most other popular approaches to moral uncertainty without high credence in the good of making more happy people. I'd guess most EA interventions are net negative according to some view that isn't totally implausible, and this isn't enough to stop us from pursuing them. Some approaches to moral uncertainty, like the property rights approach, could support family planning even if it's (very) net negative according to a credal supermajority of views. (Maybe that counts against the approach, though!)
Your more modest (non-MEC) argument about recognizing goods (love, friendship) impersonally, on the other hand, would not be very persuasive to most people who endorse person-affecting views. It might be the most basic standard objection to them, and pretty close to just the direct assertion that PAVs are false. It also wouldn't even follow that we shouldn't support family planning, if we consider moral uncertainty and assign some credence to person-affecting views. That would depend on the specifics.
If the difference between family planning work and the next best opportunities is small enough for welfare maximizers with person-affecting views, those without person-affecting views can pay those with PAVs for the difference to prevent family planning work. Or maybe those with PAVs avoid family planning work to cooperate with other EAs, but I don't think other EAs are very against family planning, and cooperation with non-EAs might actually support family planning instead (perhaps depending on the kind, maybe less so abortion specifically).
I agree that we can distinguish between net negative (compared to doing nothing or some other default) and net positive but worse than something else, but the result is the same under consequentialism and MEC (if the argument succeeds): you shouldn't support either family planning or neartermist work generally, because there are better options, namely doing nothing (for family planning), or extinction risk reduction (for both). Again, under other approaches to moral uncertainty, it isn't obvious that family planning shouldn't be supported at all.
(This is ignoring some movement building effects of neartermist work and probably some other indirect effects. Under strong longtermism, maybe GiveWell did a lot of good by funneling people towards AI safety, building general expertise or buying reputation.)
This kind of conclusion is a great example of why a totalist utilitarian view is absurd.
I think the conclusion should instead be that we should take the impact of neartermist interventions on the experiences of future beings very seriously.
It's not necessary to endorse total utilitarianism or strong longtermism for my comment's argument to go through. If you see the loves and friendships future people may have as a positive good, even if they may not exist yet, and even if you don't weigh them as highly as those of people living in the present, then I think you should carefully consider what my comment has to say.
When people feel like they have to choose between a cherished belief and a philosophical argument, their instinct is often to keep the cherished belief and dismiss the philosophical argument. It's entirely understandable that people do that! It takes strength to listen to one's beliefs being questioned, and it takes courage to really deeply probe at whether or not one's cherished belief is actually true. However:
Eugene T. Gendlin, Focusing (Bantam Books, 1982).[1]
Quoted by Eliezer Yudkowsky in "Avoiding Your Belief's Real Weak Points".
(Edited slightly for accuracy/precision and grammar.)
MEC=maximizing expected choiceworthiness, and PAV=person-affecting view.
~99% doesn't follow from MEC alone. You need MEC plus specific intertheoretic comparisons where individual welfare under PAVs has similar or lower absolute moral value than individual welfare under the total view (or similar enough views). And you need to ground such intertheoretic comparisons. There may be a case for it, but some versions of PAVs will probably ground value quite differently and in basically incompatible ways from any total view, so these comparisons wouldn't be justified between those versions and the total view. You'd have to use another approach to moral uncertainty (possibly along with MEC+intertheoretic comparisons in more limited cases), and the other approaches wouldn't generally be nearly as sensitive to the foregone welfare of the children not born.
I'd also guess wide person-affecting views (prioritize quality and longevity, not population size) and asymmetric person-affecting views are more popular than person-affecting neutrality, and would probably be endorsed over neutrality under further reflection by most people initially attracted to it. These views still endorse making people happy over making happy people.
That being said, once you include the effects on a group of farmed animals, you should also probably include the effects on all (including wild) animals with similar or greater average moral weight and probability of moral patienthood, at least if you care about outcomes somewhat regardless of active contribution, e.g. if you're a utilitarian of any kind. This complicates things further.
Thanks for these caveats! I largely agree, but they seem to only have a modest impact on the 99% claim.
Regarding intertheoretic comparison, my prior is that a person-affecting view (PAV) should have little to no effect on one's valuation of welfare. I don't really see why PAV vs non-PAV would radically disagree on how important it is to help others. In this case, the disagreement would indeed have to be radical: even if, for some reason, PAV caused someone to 10x their valuation of welfare, they'd still have to be 90% certain PAV was true for FEM to be positive.
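To make the arithmetic behind these thresholds explicit, here is a minimal sketch under MEC. It assumes the illustrative numbers from the thread (~100 counterfactual lives prevented per maternal death averted) and a common welfare scale across views; both are assumptions of the argument, not established figures.

```python
# Credence threshold in PAV for the intervention to have positive
# expected choiceworthiness under MEC, assuming (as in the thread)
# ~100 lives prevented per maternal death averted and a common scale.

def pav_credence_threshold(lives_prevented, pav_value_multiplier=1.0):
    """Minimum credence p in the person-affecting view (PAV) such that
        p * benefit - (1 - p) * lives_prevented > 0,
    where benefit is one maternal death averted (value 1 under PAV,
    times any multiplier PAV places on welfare)."""
    benefit = 1.0 * pav_value_multiplier
    return lives_prevented / (lives_prevented + benefit)

print(round(pav_credence_threshold(100), 3))        # 0.99
print(round(pav_credence_threshold(100, 10.0), 3))  # 0.909: even a 10x
# PAV valuation of welfare still requires ~90% credence in PAV.
```

The threshold is just harm/(harm + benefit), which is why a 10x revaluation of welfare under PAV only moves the required credence from ~99% to ~90%, as noted above.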
For PAVs where value is grounded quite differently, I don't have an informed prior on just how different the PAV's grounding of value may be. If there are highly supported PAVs where welfare is clearly valued far greater than under non-PAV, then that would update the 99% claim. However, I don't know of any such PAV, nor of any non-PAV where welfare is valued far greater than under PAV (which would have the opposite effect).
Your second consideration makes sense, and might result in a modest dampening effect on the 99% number, if the increase in mothers' standard of living due to FEM's intervention is weighted heavily.
Couldn't agree more on the farmed and wild animal effects :) I won't pretend to have any degree of certainty about how it all shakes out.
It's less about valuing individual welfare at a greater rate under PAVs (although that could happen in principle) and more about grounding value in ways that don't allow intertheoretic comparisons with total views at all, or just refusing to attempt such intertheoretic comparisons altogether, or refusing to apply MEC using them. It could be like trying to compare temperature and weight, which seems absurd because they measure very different things. Even if the targets are at least superficially similar, like welfare in both cases, the units could still be incompatible, with no justifiable common scale or conversion rate between them.
A person-affecting view could ground value by using a total view-compatible welfare scale and then just restricting its use in a person-affecting way, and that would be a good candidate for a common scale with the total view, and so for intertheoretic comparisons under MEC in the obvious way: valuing an existing individual's welfare identically across the views. However, it's not clear that this is the only plausible or preferred way to ground person-affecting views.
Stepping back, your argument depends on high confidence in multiple controversial assumptions:
the use of MEC at all (possibly alongside other approaches, rather than any other approaches to moral uncertainty not involving MEC, like a moral parliament or a property rights approach, which tend to be more generally applicable including to non-quantitative views, less fanatical, and, in my view, more fair),
the use of MEC with intertheoretic comparisons at all (possibly alongside other normalization approaches, rather than other normalization approaches for MEC without intertheoretic comparisons),
for almost each plausible grounding of a plausible PAV, the existence and use of a specific common scale for intertheoretic comparisons with some grounding of a total view (or similar) under MEC,
MEC with the given intertheoretic comparisons from 3 generally disapproving of family planning.
Ah, I meant to point this out because your quotes from MacAskill and Ord are critical of neutrality, and I don't expect neutrality to be very representative of those holding person-affecting views or who would otherwise support family planning for person-affecting reasons. It could be a strawman.
Your statements about PAV make sense. I typically think about PAV as you wrote:
But there could be other conceptions. Somewhat tangentially, I'm deeply suspicious of views which don't allow comparison to other views, which I see as a handwave to avoid having to engage critically with alternative perspectives.
If I'm talking to a person who doesn't care about animals, and I try to persuade them using moral uncertainty, and they say "no, but one human is worth infinity animals, so I can just ignore whatever magnitude of animal suffering you throw at me", and they're unwilling to actually quantify their scales and critically discuss what could change their mind, that's evidence that they're engaging in motivated reasoning.
As a result, I hold very low credence in views which don't admit some approach to intertheoretic comparison. I haven't spent much time thinking about which approach to resolving moral uncertainty is best, but MEC has always seemed to me to be a clear default, as with maximizing EV in everyday decision-making. As with maximizing EV, MEC can also be fairly accused of fanaticism, which is a legitimate concern.
On neutrality, I've always considered the intuition of neutrality to be approximately lumpable with PAV, so please let me know if I'm just wrong there. From what I recall, Chapter 8 of What We Owe the Future argues strenuously against both the intuition of neutrality and PAV, and when I was reading it, I didn't detect much of a difference between MacAskill's treatment of the two.
I think there are legitimate possibilities for infinities and value lexicality, though (for me personally, extremely intense suffering seems like it could matter infinitely more), and MEC with intertheoretic comparisons would just mean infinity-chasing fanaticism.[1] It can be a race to the bottom to less plausible views, because you can have infinities that lexically dominate other infinities, with a lexicographic order. You're stuck with at least one of the following:
infinity-chasing fanaticism (with MEC with intertheoretic comparisons),
ruling out these views with certainty,
ruling out the intertheoretic comparisons,
not using MEC.
The full MEC argument in response to a view X on which humans matter infinitely more than nonhuman animals, allowing lexicographic orders, is not very intuitive. There are (at least) two possible groups of views to compare X to:
Y. Humans and nonhuman animals both matter only finitely.
Y′. Humans and nonhuman animals both matter infinitely, an infinite "amplification" of Y.
(Also Z. Humans matter finitely, and some nonhuman animals matter infinitely.)
When you take expected values/choiceworthiness over X, Y, and Y′ (and Z), you will get that Y is effectively ignored, and you end up with X and Y′ (and Z) deciding everything, and the interests of nonhuman animals wouldn't be lexically dominated. We can amplify X infinitely, too, and then do the same to Y′, just shifting along the lexicographic order to higher infinities. And we can keep shifting lexicographically further and further. Then, the actual reason nonhuman animals' interests aren't lexically dominated, if they're not, will be because of exotic implausible views where nonhuman animals matter infinitely, to some high infinity. Even if it's the right answer, that doesn't seem like the right way to get to it.
If you don't allow lexical amplifications, then you have to rule out one of Y or Y′. Or maybe you only allow certain lexical amplifications.
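One way to see why the finite view Y drops out is to model lexical value as tuples ordered by "infinity level" and compared lexicographically. This is only an illustrative sketch of the structure described above; the views, levels, credences, and numbers are hypothetical, chosen to mirror the argument rather than taken from any real analysis.

```python
# Sketch: choiceworthiness under lexical views as tuples
# (higher "infinity level" term first), compared lexicographically.
# All numbers below are hypothetical illustrations.

def expected_choiceworthiness(weighted_values):
    """Sum credence-weighted tuple values componentwise."""
    length = max(len(v) for _, v in weighted_values)
    ev = [0] * length
    for credence, value in weighted_values:
        padded = (0,) * (length - len(value)) + tuple(value)
        for i, term in enumerate(padded):
            ev[i] += credence * term
    return tuple(ev)

# Option A mostly helps humans; option B mostly helps animals.
# Each value is (infinite-level term, finite-level term), under:
#   X : humans matter infinitely more than animals.
#   Y : humans and animals both matter finitely.
#   Y': both matter infinitely (Y "amplified").
option_a = {"X": (10, 0), "Y": (0, 10), "Y'": (10, 0)}
option_b = {"X": (1, 0), "Y": (0, 100), "Y'": (101, 0)}
credence = {"X": 40, "Y": 50, "Y'": 10}  # percentage credences

ev_a = expected_choiceworthiness([(credence[v], option_a[v]) for v in credence])
ev_b = expected_choiceworthiness([(credence[v], option_b[v]) for v in credence])

# Tuples compare lexicographically, so only the infinite-level term
# decides: Y's large finite stakes (at 50% credence!) are ignored,
# and X and Y' alone settle the comparison.
print(ev_a)         # (500, 500)
print(ev_b)         # (1050, 5000)
print(ev_a > ev_b)  # False
```

Here B wins purely on the infinite-level term contributed by X and Y′, no matter how large the finite stakes under Y are made, which is the "Y is effectively ignored" point above.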
For another critique of MEC's handling of infinities, see A dilemma for Maximize Expected Choiceworthiness (MEC), and the comments.
I think the intuition of neutrality is sometimes just called "the person-affecting restriction", and any view satisfying it is a person-affecting view, but there are other person-affecting views (like asymmetric ones, wide ones). I consider it to be one among many person-affecting views.
Although you can also "amplify" any nonlexical view into a lexical one, by basically multiplying everything by infinity, e.g. shifting everything under a lexicographic order.
This is a good critique of MEC. Thanks for spelling it out, as I've never critically engaged with it before. At a high level, these arguments seem very similar to reductios of fanaticism in utilitarianism generally, such as the thought experiment of a 51% chance of double utility versus a 49% chance of zero utility, and Pascal's mugging.
I could play the game with the "humans matter infinitely more than animals" person by saying "well, in my philosophical theory, humans matter the same as in yours, but animals are at the same lexicographic position as humans". Of course, they could then say, "no, my lexicographic position of humanity is one degree greater than yours", and so on.
This reminds me of Gödel's Incompleteness Theorem, where you can't just fix your axiomatization of mathematics by adding the Gödel statement to the list of axioms, because then a new Gödel statement pops into existence. Even if you include an axiom schema where all of the Gödel statements get added to the list of axioms, a new kind of Gödel statement pops into existence. There's no getting around the incompleteness result, because it comes from the power of the axiomatization of mathematics, not from some weakness which can be filled. Similarly, MEC can be said to be a "powerful" system for reconciling moral uncertainty, because it can incorporate all moral views in some way, but that also allows views to be created which "exploit" MEC in a way that other reconciliations aren't (as) susceptible to.