I’d expect interventions to reduce miscarriages to be more tractable and scalable, and far less costly—so more effective than reducing intentional abortions.
That may well be true. I’ll confess to being quite ignorant on the subject. Jeff Kaufman gave a great laundry list of interventions in that regard, and I think analyzing their effectiveness is worth taking a poke at.
Perhaps there’s a way to rephrase and clarify the disclaimer to account for this; e.g., you’re less concerned about abortion as a cause area and more about a moral constraint regarding projects—i.e., just like we shouldn’t fund projects leading to worker abuse, we shouldn’t fund projects leading to abortion.
This is a very fair point. There’s been a good deal of very understandable discussion of how I shoehorn certain interventions to fit a very tight mold—“voluntary abortion reduction”—when there are broader, less divisive lessons to take from this, or when the lesson isn’t even wide enough. In retrospect, there was much I could have arranged and emphasized differently in this post. My reply to Julia’s similar concern here is the best clarification I’ve been able to give on what I was actually trying to say.
is there a particular moral difference between preventing a (statistically predictable) spontaneous abortion and preventing an intentional one, per se?
If in both cases the outcome is one more child being born and living a happy life than otherwise, then I don’t see a moral difference.
there’s another Scourge unmet by the appendix: discarded frozen embryos
There it is! I actually think this is a pretty compelling pro-embryo-abortion objection in the deontological case. However, in my personal opinion, abortion is morally wrong specifically because it prevents a child from being born and living a happy life. If we had some process through which we could thaw out frozen embryos, place them in artificial wombs, and give them happy lives, then I would agree that destroying a frozen embryo—when the counterfactual is putting them through this process and having one more happy person—would be morally wrong. However, we seem to be far off from being able to do that. To needlessly speculate, I think it’s much more likely that aligned AGI teaches us to simulate arbitrarily happy people before we ever get to that point. In that case, I would even argue that diverting computational power from the AGI such that it loses the ability to simulate one person would be morally wrong in the same way. I basically see all of these cases as just other ways to add (or not add) one more happy person.
Thanks for your questions, Ramiro!