(I liked this post, but apologise for pushing into object-level concerns.)
It isn’t wholly clear to me that ‘EA concerns’ will generally push against broadly liberal views on abortion: much seems to depend on which considerations one weighed heavily for or against abortion in the first place. I agree that common EA considerations could well undermine reasons which led someone to a liberal view of abortion, but they may also undermine reasons which led to a conservative one. A few examples of the latter:
1) One may think that human life is uniquely sacred. Yet one might find this belief challenged (among other ways) by those interested in animal welfare, and conclude that, instead of occupying a separate moral order from other animals, humans should sit on a continuum with them according to some metric that ‘really’ explains moral value (degree of consciousness, or similar). Although fetuses may well remain objects of moral concern, they become less sacrosanct, and so abortion becomes more conscionable.
2) As Amanda alludes to, consequentialist/counterfactual reasoning will supply many cases where abortion becomes permissible depending on particular circumstances, especially in a world where QALYs are relatively cheap (suppose Sarah wants to continue her pregnancy, but I offer to give £20,000 to AMF if she has an abortion instead; ignoring all second-order effects, this is likely a good deal, and more realistically one could include factors like expected loss of earnings, stigma, and so on). If one’s opposition to abortion is motivated by deontological concerns, a move to teleological ways of thinking may lead one to accept that there will be many cases where abortion is morally preferable.
3) Total views and replaceability could also be supportive: if Sarah anticipates being in a better position to start a family later (so both she and the child stand to benefit), then the problem reduces to a different-number population problem (imagine a 3x2 table: Sarah gains some QALYs by aborting over adopting, the ‘current fetus’ loses some QALYs by abortion, yet the ‘counterfactual fetus’ gains even more QALYs by abortion, as it will be brought into existence and will, ex hypothesi, have a better life than the current fetus would have). So the choice to abort (on the proviso of having a child in better circumstances) looks pretty good. (You are right to note there are other corrections, but they seem less significant.)
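The 3x2 bookkeeping in (3) can be made concrete with a minimal sketch. All QALY figures below are wholly invented, chosen only to illustrate the structure of the total-view comparison, not to reflect any real estimate:

```python
# Replaceability under a total view: compare total QALYs across the two
# options. Every number here is hypothetical.

# Option A: carry the current pregnancy to term now.
sarah_now = 60        # Sarah's remaining QALYs if she has the child now
current_fetus = 55    # the current fetus's lifetime QALYs

# Option B: abort now, have a child later in better circumstances.
sarah_later = 63      # Sarah gains some QALYs by delaying
current_fetus_b = 0   # the current fetus is never born
counterfactual = 65   # the later child, ex hypothesi, has a better life

total_a = sarah_now + current_fetus
total_b = sarah_later + current_fetus_b + counterfactual

# On a total view, Option B wins whenever the counterfactual child's
# QALYs exceed the current fetus's QALYs minus Sarah's gain from delaying.
print(total_a, total_b, total_b > total_a)
```

With these illustrative numbers the abort-and-retry option comes out ahead, which is all the argument in (3) needs: the conclusion turns entirely on the relative magnitudes, not on any particular values.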
The all-things-considered calculus is unclear, but it is not obvious to me that the ‘direction of travel’ after contemplating EA-related principles is generally against abortion, and I would guess that even an ‘EA perspective’ constructed from the resources you suggest would not count as a conservative view. It may be that the inconvenience and risk of pregnancy alone would be insufficient to justify an abortion by the lights of this ‘EA perspective’, but most women who have had an abortion were motivated by weightier concerns, concerns the ‘EA perspective’ might well endorse as sufficient justification in the majority of cases where abortions have actually occurred.
There might be a regression-to-the-mean issue whereby considerations found compelling by the lights of one perspective will generally look less compelling when that perspective changes (so if one was, ‘pre-EA’, strongly in favour of a liberal view on abortion, this enthusiasm would likely regress). There’s also a related point that one should not expect all updates to be psychologically pleasant: they are probably no more likely to comfort our prior convictions than to offend them. But I’m not sure the object-level example provides good support for the more meta considerations.
Although fetuses may well remain objects of moral concern, they become less sacrosanct, and so abortion becomes more conscionable.
Good point. I suspect this may end up being mathematically equivalent to the moral uncertainty argument, which in practice equates to counting fetuses as having 30% of the value of an adult. In both cases EA concerns lead to taking a midpoint between the “fetuses don’t matter” and “unborn babies are equally valuable” points of view.
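Read as an expected-value calculation under moral uncertainty, a figure like 30% falls out of one's credences across the two views. The credences below are invented purely for illustration:

```python
# Expected moral weight of a fetus under moral uncertainty.
# Credences are hypothetical, picked so the weight comes out at 0.3.
views = [
    (0.30, 1.0),   # credence 0.30: fetuses matter as much as adults
    (0.70, 0.0),   # credence 0.70: fetuses carry no moral weight
]

# Weight each view's valuation by one's credence in it and sum.
expected_weight = sum(credence * weight for credence, weight in views)
print(expected_weight)
```

Any intermediate credence in the “equally valuable” view yields an intermediate weight, which is the sense in which both routes land on a midpoint between the two poles.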
So the choice to abort (on the proviso of having a child in better circumstances) looks pretty good. (You are right to note there are other corrections, but they seem less significant.)
My impression is that the largest determinant of child success is genetic, followed by shared environment; direct parental environmental influences are small. On the one hand, this suggests that simply waiting will not improve things all that much, especially as she should be uncertain whether her situation really will improve. On the other hand, it suggests an easy way to improve the ‘quality’ of the later baby: choose a higher-quality father. This improvement quite possibly wouldn’t be captured by QALYs (as their quality weights are bounded above by 1) but seems significant nonetheless.
There might be a regression-to-the-mean issue whereby considerations found compelling by the lights of one perspective will generally look less compelling when that perspective changes (so if one was, ‘pre-EA’, strongly in favour of a liberal view on abortion, this enthusiasm would likely regress).
That’s a good point. Indeed, it seems that in basically every instance EA considerations degrade the quality of the standard object-level arguments. They then make up for this by supplying entirely new arguments: moral uncertainty, replaceability.
Do you have any cases in mind where EA considerations cause psychologically unpleasant updates?