I'd expect that interventions on reducing miscarriages would probably be more tractable and scalable, and way less costly, and so more effective than reducing intentional abortions.
That may well be true. I'll confess to being quite ignorant on the subject. Jeff Kaufman gave a great laundry list of interventions in that regard, and I think analyzing their effectiveness is worth taking a poke at.
perhaps there's a way to rephrase and clarify the disclaimer to account for this; e.g., you're less concerned about abortion as a cause area and more about a moral constraint regarding projects; i.e., just like we shouldn't fund projects leading to work abuse, we shouldn't fund projects leading to abortion
This is a very fair point. There's been a bunch of very understandable discussion of how I shoehorn certain interventions to fit a very tight mold ("voluntary abortion reduction") when there are either broader/less divisive lessons to take from this, and/or the lesson isn't even wide enough. In retrospect, there was much I could have arranged/emphasized differently in this post. My reply to Julia's similar concern here is the best clarification I've been able to give on what I was actually trying to say.
is there a particular moral difference between preventing a (statistically predictable) spontaneous abortion and preventing an intentional one, per se?
If, in both cases, the outcome is one more child being born and living a happy life than otherwise, then I don't see a moral difference.
there's another Scourge unmet by the appendix: discarded frozen embryos
There it is! I actually think this is a pretty compelling pro-embryo-abortion objection in the deontological case. However, in my personal opinion, abortion is morally wrong specifically because it prevents a child from being born and living a happy life. If we had some process through which we could thaw out frozen embryos, place them in artificial wombs, and give them happy lives, then I would agree that destroying a frozen embryo, when the counterfactual is putting them through this process and having one more happy person, would be morally wrong. But we seem to be a long way off from being able to do that. To needlessly speculate, I think it's much more likely that aligned AGI teaches us to simulate arbitrarily happy people before we ever get to that point. In that case, I would even argue that diverting computational power from the AGI such that it loses the ability to simulate one person would be morally wrong in the same way. I basically see all of these cases as just other ways to add (or not add) one more happy person.
Thanks for your questions, Ramiro!