I think this is a pretty repugnant attitude in general, and a particularly objectionable instance of it, but I include it here for completeness.
It sounds like your statement here amounts to “this attitude triggers a disgust response in me, therefore it’s incorrect”. I’m not persuaded. A more persuasive argument: there’s a danger that our hypothetical woman aborts her fetus, gets rich, and then uses her developing powers of rationalization to find some reason not to give very much money to charity.
Thought experiment: Let’s say instead of being a woman, we’re dealing with a female android. Unlike humans, androids always know their own minds perfectly, never rationalize, keep all their promises, etc. The android tells you in her robotic voice that she’s aborting her android fetus so she can make more money and save more lives, and you know for a fact that she’s telling the truth. Does your answer to her stay the same?
Another thought: Maybe the reason you feel that this attitude is repugnant is that it sounds hypocritical. In that case, it might be useful to distinguish between preferences and advocacy. For example, maybe as an EA I would prefer that non-EA women carry their unwanted fetuses to term for the reasons you outline. But that doesn’t mean I have to start protesting at abortion clinics. If a woman came to me and asked whether she should carry her baby to term or not, it seems reasonable for me to respond, “Well, what are your opportunity costs like? Where will the time and energy of raising your baby be spent if you choose to abort it?” and listen to her before giving my answer.
In fact, I would argue that endorsing many universal moral principles, such as “don’t abort fetuses”, “don’t eat animals”, and “borders should be open”, effectively amounts to compartmentalization—the very thing you wrote this essay against. The real world is complicated. Our values are complicated. Simple principles like “don’t eat animals” are intuitively appealing and easy for groups to rally around. But, inconveniently, the world is not simple and our moral principles will conflict. When they do, we should have a process for resolving those conflicts, and I’m not sure that process should favor simplicity in the result. In the context of discussing a single principle, the way you discuss your “don’t abort fetuses” principle here, it’s easy to compartmentalize and avoid letting the “replaceability” principle into the “don’t abort fetuses” compartment. I wonder whether, if your essay had been about replaceability instead of abortion, you would have come to the opposite conclusion. In other words, I wonder if humans have a bias toward letting moral conflicts resolve in favor of whichever moral principle is most salient. That seems suboptimal.
Another thought: If women are morally obligated to carry unwanted babies to term, are they also obligated to pump out as many babies as possible during their fertile years? Personally, the idea that the two cases are different strikes me as status quo bias. (Idea based on this paper.)
“this attitude triggers a disgust response in me, therefore it’s incorrect”
Ironically, I was partly worried about being accused of offering a straw (wo)man, so I wanted to distance myself from it.
Does your answer to her stay the same?
Good analogy. I think I’m basically concerned that this is not the sort of reasoning we can accept from fallible humans, not that it is inherently wrong, so I would be much more tolerant of the android. I Earn to Give, so vaguely similar considerations come up—some activities that are generally self-serving are transformed into altruistic ones by the resulting donations. In general, I think the potential for personal self-deception is sufficiently high that I don’t endorse such activities in myself. I guess perhaps I think we’re less serious about altruism than we often like to think we are.
You have a good point about second-order compartmentalization.
If women are morally obligated to carry unwanted babies to term, are they also obligated to pump out as many babies as possible during their fertile years? My moral intuitions suggest they are.
Yeah I think this is also credible, albeit slightly less so. I frequently feel guilty about not having had children yet. Nor can I simply defend myself by arguing they are too expensive as:
I think Bryan Caplan’s argument that children can be raised with dramatically less effort than most middle-class westerners put in is correct.
Jeff and Julia’s recent accounts do suggest it can be done.
I know my friends and family would be supportive, or even actively pleased.
I reject this argument in the abortion case
I am a consequentialist and am not very keen on making an acts-omissions distinction.
It’s also an unusual case because it’s one where I am actually falling morally short of the majority of women throughout history. I guess this is how people feel who think the 20th century was a disaster because of factory farming.
When people like Jeff, Julia, and Bryan say childrearing is cheap, they’re saying that it shouldn’t stop someone from doing it if they want to. Not that it’s the cheapest way to make there be more people. That honour would go to existential risk reduction, or failing that to global health measures. No one is trying to get effective altruists to have kids if they don’t want to.
I think I’m basically concerned that this is not the sort of reasoning we can accept from fallible humans, not that it is inherently wrong, so I would be much more tolerant of the android.
Cool. I don’t know how I feel about Eliezer’s ethical injunctions sequence. I’d say I basically agree with it, with the caveat that I’m maybe half as concerned about it as he is. I’m happy to pirate books, leave lousy tips, “forget” to put away dishes so non-EA roommates do it, etc. in the service of EA, but I’d think very hard before, say, murdering someone.
That said, I’m glad that Eliezer is as concerned as he is… it does a good job of making up for the fact that he’s so willing to disregard the opinions of others (to his discredit, in my opinion). You’ve got to have some kind of safeguard. I guess maybe in my case I feel like I’m well safeguarded by thinking carefully before straying outside the bounds of what friends & society regard as non-horrible ethical behavior, which is why I’m not concerned about aborting a baby leading to some kind of slippery slope of unethicalness… it’s on the “sufficiently ethical” side of my “publicly regarded as non-horrible” fence.
I frequently feel guilty about not having had children yet.
I think you’re only morally obligated to have kids insofar as they’re the cheapest way to purchase QALYs with your time, energy, and money. I expect existential risk reduction is the cheapest way to do this if you think future lives have value comparable to present lives.* I’m not sure how it compares to GiveWell’s top charities. If it turns out having kids really is the cheapest way to purchase QALYs, I wonder if you’re best off focusing on efforts to get other people to have kids (or improving gender relations so people get married more, or something like that), and only have kids to facilitate your advocacy and make it clear that you aren’t a hypocrite.
* This is the strongest argument I’ve seen for this position from a valuing-future-life perspective, but others have argued that decreased fertility and a slowing economy will be good for x-risk reduction—the issue seems complicated.
One final point: I tend to think that even in developed countries, many people live lives that are full of misery (I’m wealthy and privileged and employed with hundreds of Facebook friends, and I still feel intense misery much more often than I feel intense joy). That’s part of the reason why I’m so bullish on H+ causes.
This seems to be a further example of the OP’s point, one that she overlooked. There are multiple principles one might use to differentiate the two views:
It is bad to abort a foetus
It is good to have children
but many of them are not available to EAs:
EAs tend not to believe in the acts-omissions distinction
EAs tend not to hold person-affecting views of ethics
EAs tend to value the welfare of those who have not even been conceived yet
EAs are unlikely to believe that having been responsibly chaste / using contraception and thereby not becoming pregnant relieves you of any responsibility for the unborn
I agree with pretty much all of this, especially the last two paragraphs.
That last comparison also highlights that the argument against abortion in the popular space is overwhelmingly deontological, rather than consequentialist; it revolves around whether ‘foetuses are people’ or not, and therefore whether ‘abortion is murder’ or not, with the assumption granted (on both sides) that murder is automatically wrong. Which is why the popular space doesn’t make the link between women having abortions and women simply choosing not to have children.
And if you are deontological in your thinking, the argument about moral uncertainty is pretty strong; a 10% chance (say) of committing murder sounds terrible.
But for me personally, as I’ve become more consequentialist in my thinking (which EA has been a part of, though not the only part), it’s actually pushed me more and more pro-abortion.
And if you are deontological in your thinking, the argument about moral uncertainty is pretty strong; a 10% chance (say) of committing murder sounds terrible. But for me personally, as I’ve become more consequentialist in my thinking (which EA has been a part of, though not the only part), it’s actually pushed me more and more pro-abortion.
I think this is missing the point of the post. I’m sure you are pretty consequentialist in your thinking; so am I. But are you certain that consequentialism is the correct moral theory? Such certainty seems implausible; there are lots of examples where consequentialism favours things that seem crazy. If you do have some credence in moral theories that ban murder, and that include abortion as a case of murder, you are going to have to take that into account if you wish to act morally.
Maybe I’m misunderstanding you, but I actually don’t think I missed that point at all if you read to the end of my post.
The whole point is that I’m not certain that consequentialism is correct but that my internal probability of it being so has been sharply rising, which is why “as I’ve become more consequentialist in my thinking...it’s actually pushed me more and more pro-abortion”. The ‘more’ implies lack of certainty/conviction here both for my current and (especially) my past self.
I’m claiming that deontology broadly provides more of the anti-abortion arguments than consequentialism does, certainly in the popular space. So it’s reasonable for more consequentialist groups (like EAs) to be more pro-abortion.
If your only point is that people with greater credence in consequentialism should be more pro-abortion, then I would agree. However, I interpreted your comment as also saying or implying that you were, in fact, pro-abortion, which is of course different (and I apologise if you didn’t imply this).
Upvoted; good piece.