I think I’m basically concerned that this is not the sort of reasoning we can accept from fallible humans, not that it is inherently wrong, so I would be much more tolerant of the android.
Cool. I don’t know how I feel about Eliezer’s ethical injunctions sequence. I’d say I basically agree with it, with the caveat that I’m maybe half as concerned about it as he is. I’m happy to pirate books, leave lousy tips, “forget” to put away dishes so non-EA roommates do it, etc. in the service of EA, but I’d think very hard before, say, murdering someone.
That said, I’m glad that Eliezer is as concerned as he is… it does a good job of making up for the fact that he’s so willing to disregard the opinions of others (to his discredit, in my opinion). You’ve got to have some kind of safeguard. I guess maybe in my case I feel like I’m well safeguarded by thinking carefully before straying outside the bounds of what friends & society regard as non-horrible ethical behavior, which is why I’m not concerned about aborting a baby leading to some kind of slippery slope of unethicalness… it’s on the “sufficiently ethical” side of my “publicly regarded as non-horrible” fence.
I frequently feel guilty about not having had children yet.
I think you’re only morally obligated to have kids insofar as they’re the cheapest way to purchase QALYs with your time, energy, and money. I expect existential risk reduction is the cheapest way to do this if you think future lives have value comparable to present lives.* I’m not sure how it compares to GiveWell’s top charities. If it turns out having kids really is the cheapest way to purchase QALYs, I wonder if you’re best off focusing on efforts to get other people to have kids (or improving gender relations so people get married more, or something like that), and only having kids yourself to facilitate your advocacy and make it clear that you aren’t a hypocrite.
* This is the strongest argument I’ve seen for that view from a valuing-future-lives standpoint, but others have argued that decreased fertility and a slowing economy will be good for x-risk reduction; the issue seems complicated.
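To make the “cheapest way to purchase QALYs” comparison concrete, here is a minimal sketch of the arithmetic. All of the numbers below are made-up placeholders for illustration, not real cost-effectiveness estimates; the point is only the structure of the comparison.

```python
# Sketch of the "cheapest QALYs" comparison. Every figure here is a
# hypothetical placeholder, not a real cost-effectiveness estimate.

def cost_per_qaly(total_cost, qalys):
    """Dollars (or dollar-equivalents of time and energy) per QALY purchased."""
    return total_cost / qalys

options = {
    # option: (assumed lifetime cost in $, assumed QALYs produced)
    "raise a child": (250_000, 70),            # placeholder numbers
    "top GiveWell charity": (250_000, 2_500),  # placeholder numbers
    "x-risk reduction": (250_000, 100_000),    # highly speculative placeholder
}

for name, (cost, qalys) in options.items():
    print(f"{name}: ${cost_per_qaly(cost, qalys):,.0f} per QALY")

# Under these made-up assumptions, having kids is only obligatory if the
# first option comes out cheapest per QALY -- which, here, it doesn't.
```

The upshot is just that the obligation claim reduces to a cost-per-QALY comparison, so it stands or falls with the empirical estimates you plug in.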
One final point: I tend to think that even in well-developed countries, many people live lives that are full of misery (I’m wealthy and privileged and employed with hundreds of Facebook friends, and I still feel intense misery much more often than I feel intense joy). That’s part of the reason why I’m so bullish on H+ causes.
Upvoted; good piece.
It sounds like your statement here amounts to “this attitude triggers a disgust response in me, therefore it’s incorrect”. I’m not persuaded. A more persuasive argument: there’s a danger that our hypothetical woman aborts her fetus, gets rich, and then uses her developing powers of rationalization to find some reason not to give very much money to charity.
Thought experiment: Let’s say instead of being a woman, we’re dealing with a female android. Unlike humans, androids always know their own minds perfectly, never rationalize, keep all their promises, etc. The android tells you in her robotic voice that she’s aborting her android fetus so she can make more money and save more lives, and you know for a fact that she’s telling the truth. Does your answer to her stay the same?
Another thought: Maybe the reason you feel this attitude is repugnant is that it sounds hypocritical. In that case, it might be useful to distinguish between preferences and advocacy. For example, maybe as an EA I would prefer that non-EA women carry their unwanted fetuses to term for the reasons you outline. But that doesn’t mean I have to start protesting at abortion clinics. If a woman came to me and asked whether she should carry her baby to term, it seems reasonable for me to respond, “Well, what are your opportunity costs like? If you abort, where will the time and energy that would have gone into raising the baby be spent instead?” and listen to her answer before giving mine.
In fact, I would argue that endorsing many universal moral principles, such as “don’t abort fetuses”, “don’t eat animals”, and “borders should be open”, effectively amounts to compartmentalization, the very thing you wrote this essay against. The real world is complicated. Our values are complicated. Simple principles like “don’t eat animals” are intuitively appealing and easy for groups to rally around. But, inconveniently, the world is not simple, and our moral principles will sometimes conflict. When they do, we should have a process for resolving those conflicts, and I’m not sure that process should favor simplicity in the result. In the context of discussing a single principle, the way you discuss your “don’t abort fetuses” principle here, it’s easy to compartmentalize and avoid letting the “replaceability” principle into the “don’t abort fetuses” compartment. I wonder whether, if your essay had been about replaceability instead of abortion, you would have come to the opposite conclusion. In other words, I wonder if humans have a bias toward resolving moral conflicts in favor of whichever principle is most salient. That seems suboptimal.
Another thought: If women are morally obligated to carry unwanted babies to term, are they also obligated to pump out as many babies as possible during their fertile years? Personally, treating the two cases differently strikes me as status quo bias. (Idea based on this paper.)