I’m a managing partner at AltX, an EA-aligned quantitative crypto hedge fund. I previously earned to give as a Quant Trading Analyst at DRW. In my free time, I enjoy reading, discussing moral philosophy, and exploring Wikipedia rabbit holes.
Ariel Simnegar
You outline a moral dichotomy between the following:
Actions which negatively affect a person’s future interests,
e.g. a mother taking a drug which causes birth defects in her child,
which are morally wrong
Actions which prevent the occurrence of a person having future interests,
e.g. a mother preventing the birth of her child,
which are morally neutral
It seems to me that longtermism explicitly rejects this dichotomy, because longtermists believe the prevention of the occurrence of the interests of innumerable future people would be a catastrophic moral loss. A believer in this dichotomy would argue that a human extinction event is morally neutral with respect to the interests of innumerable future people who would have lived, because the extinction event simply “prevents those interests from arising in the first place”. Do you agree that this dichotomy is inconsistent with longtermism?
You argue that the value added by saving a life is separable into two categories:
Person-directed: The value added by positively affecting an existing person’s interests.
Undirected: The value added simply by increasing the total amount of happy life lived.
Let’s define the “coefficient of undirected value” C, between 0 and 1, to be the proportion of value added for undirected reasons, as opposed to person-directed reasons. The totalist view would set C=1, arguing that there is no intrinsic value to helping a particular person. The person-affecting view would set C=0, arguing that it is only coherent to add value when doing so positively affects an existing person. You argue that this is a false dichotomy, and that C should be “low,” i.e. that we should give low moral weight to interventions which only produce undirected value (e.g. increasing fertility) relative to interventions which produce both categories of value (e.g. saving a life).
I think the totalist view should be lent more credence than you lend it in your post, and that moral uncertainty should accordingly adjust C upwards to be “high.” I would endorse the implication that causing a baby to be born who otherwise would not be is “close to as good” as saving a life.
Consider choosing between the following situations (in the vein of your post’s discussion of the intrinsic harm of death):
A woman wants a child. You use your instant artificial womb to create an infant for her.
A woman just gave birth to an infant. The infant is about to painlessly die due to a genetic defect. You prevent that death.
For the sake of argument, let’s assume that the woman’s interests are identical in both cases. (i.e. the sadness Woman 1 would have had if you didn’t make her a child is the same as the sadness Woman 2 would have had if her child painlessly died, and the happiness of both women from being able to raise their child is the same.)
To me, it seems most intuitive that one should have little to no preference between Case 1 and Case 2. The outcomes for both the woman and the child are (by construction) identical. Of course, the value added in Case 1 is purely undirected, since the child doesn’t yet exist for its interests to be positively affected by your decision, while the value added in Case 2 includes both directed and undirected components. If we follow this intuition, we must conclude that C=1, or that C is very close to 1.
Even if you still have a significant intuitive preference for Case 2, suppose you’re choosing between two occurrences of Case 1 and one occurrence of Case 2. Many would now switch to preferring the two occurrences of Case 1, since we’d then have two happy mothers and children rather than one. However, this still implies C>0.5. If we accept that Case 1 is close to as good as Case 2, it seems hard to escape the conclusion that C is “high,” and we should adjust the way we think about increasing fertility accordingly.
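As a minimal formal sketch of the two-for-one intuition above (where V, a symbol introduced here for illustration, denotes the total value of saving an existing life):

```latex
% Let V be the total value of saving an existing life, and C the
% coefficient of undirected value. Then, by construction:
%   Case 1 (creating a child) yields only undirected value,
%   Case 2 (saving a newborn) yields the full value.
\[
V_{\text{Case 1}} = C V, \qquad V_{\text{Case 2}} = V.
\]
% Preferring two occurrences of Case 1 to one occurrence of Case 2 gives
\[
2 C V > V \quad\Longrightarrow\quad C > \tfrac{1}{2}.
\]
```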
Let me know what you think!
Thanks for your post, Nikiz! I don’t mean this to offend, but this post reads very much like it was generated by GPT-3/ChatGPT.
If you indeed wrote this, I’d advise you to include much more clarification, examples, and precise recommendations.
Thanks for the clarification, and for your explanation of your thought process!
What a gorgeous website!
One value to think about in this website’s design is how much to optimize it for persuading EA/rationalists, vegans, or normal people. The website’s discussion of the issue and possible objections seems primarily geared towards EA/rationalist people and vegans rather than towards normal people.
Has Vegan Hacktivists undertaken any marketing research into which specific groups it makes the most sense to appeal to, and how to best design the website in that vein?
Hi! I think ChatGPT could be useful as a “personal assistant” for common subtasks in essay writing (coming up with examples, rephrasing text to avoid misinterpretation, etc.). However, I personally don’t think that fully AI-generated essays are yet capable of adding real value to EA decision-making.
Thanks for that explainer! Given those target audiences, I think the team did a fantastic design job.
A Case for Voluntary Abortion Reduction
Hi Jeff, thanks for your comment!
The first three medical interventions you pointed out would have been excellent for me to include in this post’s Community Actions section, as the possible moral patienthood of embryos would indeed update their significance. Thanks for bringing them to my attention.
On the fourth, I agree, with the following caveats:
In the case that embryos have moral patienthood and we hold non-person-affecting / deprivationist views, there shouldn’t be much difference between preventing fertilization and preventing implantation, because the outcome in terms of (adjusted) life years is the same.
Replaceability seems to become a much more compelling objection at that stage. If a couple is trying for a baby and an embryo fails to implant, they likely won’t even notice and will keep trying until they get one.
I tentatively agree with you. Without a distinction between upvotes/downvotes and agree-votes/disagree-votes, high quality posts which provoke important but difficult community conversations may have their visibility systematically reduced. (I endeavored to write this post to fit that description.)
However, since we already have that distinction for comments, there must have been a specific decision to enable the distinction for comments but not for posts, and there was presumably a good reason.
Yep, you’re right about that! It’s a greater caveat than I pointed out.
Thanks! I’ve been thinking about this issue for a long time, and it was a substantial undertaking to try to tackle it in a way which adds value to the community and enables a productive conversation. Most of the people who helped me write these drafts strongly disagreed with me, and working together with them in pursuit of these goals was an enlightening and fulfilling experience.
Your understanding is right, but it’s not the only reason why it seems to me that abortion may be wrong. I sketch out the generalization in the “Increasing the Amount of Near-Term Future People” section, but it’s probably not sufficiently explicit. Many of the arguments for why abortion may be wrong generalize to arguments for why preventing a future person from coming into being is wrong:
If abortion is wrong because we shouldn’t hold person-affecting views (i.e. we should care about possible people, and fetuses are possible people, even if they might not be considered living persons), then any action which prevents a future person from coming into being is similarly wrong, as we’re violating the preferences they counterfactually would have had.
If abortion is wrong because of deprivationism (i.e. it prevents the (adjusted) life years of the child from being lived), then any action which prevents a future person from coming into being and living out their (adjusted) life years is wrong.
Richard Chappell sketches the implications of these views quite well here. It seems to me that he generally holds these views, but includes a factor to strongly discount the value of adding a future person versus saving a life now. This enables him to believe abortion is morally OK (since his discount factor applies), but longtermism is still an imperative (since heavily discounting 10^whatever possible future people still means they’re extremely important in aggregate).
Otherwise, interventions that increased how many babies people wanted to have would be roughly interchangeable with interventions that decrease abortions.
Under non-person-affecting/deprivationist views, I would argue that that’s correct.
Hi, thanks for your comment! You make a fair point that my essay isn’t precise enough about the potential moral caveats of these charities, and I’ll try to elaborate on that here.
It looks like one common source of confusion is what the precise reasons are for why abortion may be wrong. If abortion were wrong only because embryos could have personhood, then you’d be absolutely correct that we should donate more to family planning charities which reduce the number of abortions rather than less.
However, it seems to me that a stronger reason why abortion may be wrong is for the same reason longtermists oppose x-risk: It reduces the expected amount of future people. I briefly sketch the argument in the “Increasing the Amount of Near-Term Future People” section, but I could have done a better job of it, and elaborate some more in this comment. The magnitude of the difference between adding a future person and saving a living person is debated, but it seems that many prominent EAs consider it to be close to as good as saving a living person today. What we Owe the Future’s “Is it Good to Make Happy People?” (Chapter 8) does a great job of making that case, though some disagree.
For an example of where this consideration could be relevant, consider this statement from Family Empowerment Media (FEM)’s founders:
A commitment of $7 million would fund FEM’s scaling plans over the next four years, preventing ∼3100 maternal deaths and ∼340,000 unintended pregnancies.
Let’s assume 10% of those unintended pregnancies would have been carried to term and not counterfactually replaced (to avoid child replaceability concerns). In that case, this intervention would prevent 34,000 lives from being lived, far more than the 3100 maternal lives saved. If we’re sympathetic to the above arguments (as many longtermists are), then this well-meaning intervention could arguably be doing much more harm than good.
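For concreteness, the arithmetic above can be sketched as follows (the 10% carried-to-term, non-replaced rate is the same illustrative assumption as in the text, not an empirical estimate):

```python
# Back-of-the-envelope comparison of FEM's projected effects.
maternal_deaths_prevented = 3_100
unintended_pregnancies_prevented = 340_000

# Illustrative assumption: 10% of prevented pregnancies would have been
# carried to term and not counterfactually replaced.
carried_to_term_rate = 0.10

future_lives_prevented = unintended_pregnancies_prevented * carried_to_term_rate
print(future_lives_prevented)  # 34000.0
print(future_lives_prevented > maternal_deaths_prevented)  # True
```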
It’s critical to note that supporting women’s autonomy, maternal health, and economic outcomes is a deeply important cause, and CE’s family planning charities absolutely contribute to those good outcomes. However, it seems to me that the case for not donating to such charities out of moral uncertainty reasons you pointed to could be quite strong, and that there are many other interventions which support women’s empowerment and maternal health without this possible serious negative externality.
I’d also like to note that I’m not saying EAs should never donate to FEM or its related charities. I only believe that the moral considerations are serious enough that we should temporarily suspend our support for these charities until we’ve systematically reviewed the effect of these considerations.
It could be that a systematic review uses randomized controlled trials to verify that FEM’s interventions don’t reduce the expected amount of future people at all and only space out births. The review could also show that replaceability should be accorded much higher credence than many actually accord it, and argue that even with these moral considerations, the absolute effect of FEM’s interventions is good. In that case, the suspension of support for FEM should be reversed.
I don’t think we actually disagree :)
I think voluntary abortion reduction is just one of many ways to increase the amount of near-term future people. The post’s “In Our Personal Lives” section includes the suggestions you gave and more, which I agree are arguably more effective than voluntary abortion reduction in accomplishing that goal. I also agree with you that reducing x-risk is (probably) much more important than directly increasing the amount of near-term future people, and I think far more EA resources should be devoted to the former than the latter.
So why did I care enough about voluntary abortion reduction to write this post?
I do believe that adding one future person is close to as good as saving a life, so it still seems to me that when measured against other concerns which occupy the minds of the general public, voluntary abortion reduction is very important indeed, especially given abortion’s staggering scale.
I think bringing up ideas which provoke conversations and challenge preconceptions within the community is good for its own sake.
This concern is more debatable, but I’m personally deeply receptive to the idea that our values should cause us to make mini-interventions in our personal lives. Being a vegan is a drop in the bucket of animal suffering, but making a real change in a personal life in response to my moral principles is very important to me. With apologies to those who disagree, I think about voluntarily choosing to not have abortions in the exact same way.
Thank YOU, Larks! That’s very kind of you to say.
Hi Denise! I agree that optimizing for increasing the number of children that families want and are able to happily have is probably better than voluntary abortion reduction as a means of increasing the amount of near-term future people. I apologize if I wrote anything which could imply that I “think preventing abortions is the best way to do so” (emphasis mine), as that is not my opinion.
As for why I decided to write a whole post on abortion reduction, here are some of my reasons.
Hi! You’re right that promoting contraception does reduce abortions on net. However, that’s not the only moral consideration at play, and I explain the others in much more detail here.
Hi Denise, thank you for your thoughtful comment!
On family planning, I explain the moral considerations behind the proposal to temporarily suspend support for family planning charities in more detail here.
Women will often not want to have children—so we should ensure they don’t conceive in the first place instead of terminating their pregnancies.
I agree with you completely that preventing a person from being born through contraception is much better than through abortion, because the former is much better for the woman’s physical, mental, and economic health. However, the loss of a future person is common in both cases, and I elaborate on why I think that’s a moral concern here.
Something I find lacking in your description is how much more fetuses matter morally over time in my view at least.
That’s a fair observation. Your description of the disvalue of fetal death best matches the time-relative interest account (TRIA), which you can read more about here. I do bring this account up in a footnote, but you’re right that it could have warranted a more thorough treatment in the post. TRIA best matches our intuition that the fetus’s moral significance increases throughout the pregnancy, and through moral uncertainty, I would justify the same intuition. However, I personally find the deprivationist approach of measuring the disvalue of death purely through the amount of (adjusted) life years prevented to be more intuitive.
Thanks for linking a well-researched piece! It tries to construct a coherent attribution for Alameda’s catastrophic losses, but in doing so, it makes broad conjectures which aren’t necessarily justified given the state of publicly available knowledge. The piece does aggregate public knowledge quite well, and I’d recommend it to anyone who’d like to learn more about the FTX/Alameda tragedy, provided it’s read carefully and dispassionately.