I mean, what really are the chances that effective altruism, a largely liberal, left-of-center (i.e., pro-choice) community would broadly endorse the view that terminating a pregnancy is morally wrong?
For me the problem is that, without explicit countervailing frameworks prioritizing the rights and needs of already existing people baked into it, “abortion is morally wrong” is a direct logical extension of a longtermist view that highly values maximizing the number of people on the assumption that the average existing person’s life will have positive value. This is deeply disturbing to me and a reason I’ve been growing increasingly distant from EA despite, like the commenter you’re replying to, having been involved in it for close to 8 years.
“abortion is morally wrong” is a direct logical extension of a longtermist view that highly values maximizing the number of people on the assumption that the average existing person’s life will have positive value
Regardless of whether we’re for or against abortion, I think it’s meaningful that there was no attempt to debate this assertion in the comments. Here it is stated precisely in a single sentence:
If we have (1a) total/low critical level and (1b) non-person-affecting views in population ethics, and (2) believe that a child today will live a life above the critical level (in expectation), then (3) ignoring replaceability, preventing an abortion seems to be as good as saving a life today.
If you find this disturbing, there are a few philosophical outs:
Reject (1a).
Reject (1b): Richard Chappell does this, embracing a hybrid person-affecting and non-person-affecting view.
Reject (2): Some suffering-focused utilitarians / anti-natalists would be sympathetic to this.
Accept (3), but argue that saving lives today is bad. For example, you could use the meat-eater problem, or argue that adding a person today makes us more susceptible to the unilateralist’s curse, which could expose us to greater x-risks or s-risks.
Although it seems that this will fall upon deaf ears, if you really find this argument compelling and think you have to choose between (a) accepting that abortion is morally wrong and (b) distancing yourself from EA, why are you choosing (b)?
Even if abortion is morally wrong, it seems that virtually every EA cause is more important than abortion. People in extreme poverty don’t stop mattering. Neither do animals in factory farms, or people/animals/sentient AIs who may exist in the far future. Compared to the scale of these issues, abortion is small.
Partisanship and tribalism likely explain a majority of why it consumes so much of the public attention, including why you and I care about it so much. Instead of distancing yourself from EA, what’s preventing you from advocating for more “explicit countervailing frameworks prioritizing the rights and needs of already existing people?” It seems that many EAs already agree with you.
Partisanship and tribalism likely explain a majority of why it consumes so much of the public attention
My guess is that women who wish to have the option of having an abortion and not live with the stresses of feeling like they have no choice but to have a child they don’t want will probably disagree that they feel strongly about it because of “tribalism”.
Instead of distancing yourself from EA, what’s preventing you from advocating for more “explicit countervailing frameworks prioritizing the rights and needs of already existing people?”
One plausible reason is that this is not something they actually want to spend their time on.
Let’s say you want to spend your time in the EA community discussing whether or not abortion is morally acceptable on longtermist grounds. But in the EA community, you frequently hear discussions around whether or not [insert your ethnicity here] can actually be capable of high-quality intellectual contributions, or potentially whether people like you even deserve basic human rights!
You think this is clearly wrong by any moral framework you deem acceptable, and don’t particularly enjoy discussing it (because you want to focus on the more important discussion of whether abortion is morally acceptable or not). You find it surprising that so many people in this otherwise like-minded community have somehow come to a conclusion that you feel is so unintuitive and morally unacceptable, and have done so under what you believe to be the views of the community (or at least parts of it). You think to yourself, “Maybe these people aren’t as like-minded as I thought they were; maybe we don’t quite share the same values.” And when you express your dissatisfaction and desire to distance yourself, one of the people who were vocally against [your people]’s rights says, “Well, why don’t you stay in the community so you can make a case for your position and argue for it?”
I’m not claiming at all that this is the reason for lastmistborn’s distancing from the EA community; I’m just illustrating one plausible reason, mainly to indicate that they should have no obligation or expectation placed on them to stay, nor to argue against views they find disturbing, before they choose to distance themselves (I recognise you’re not explicitly doing this!). But basically, the costs of engagement here are asymmetrical: it’s much less (e.g. emotionally) costly to raise a discussion point that is perceived to be against someone else’s rights than it is for those who perceive their rights to be challenged to engage and justify why they think they deserve those rights.
Yes, this seems accurate. I’ve spent some time in liberal/left spaces talking about EA with folks who highly prioritize pro-choice policy in their politics (say that 5 times fast!). If they viewed OP’s arguments as being roughly synonymous with EA as a whole (they’re not, but that doesn’t mean the impression couldn’t exist), it would be totally understandable, I think, for them to dismiss the rest of EA. “This community doesn’t share my values,” they might say, as bruce alludes to.
Personally, I think EA is very, very compatible with mainstream left-of-center liberalism/leftism, and, in my view, a pro-choice ethic is probably a very significant part of that. Not to say that OP’s view is indefensible; it’s just that I think there is a tension between their stated arguments and the broader values and politics that are the foundation of most EAs’ actually-existing political views.
Tentatively, I’m imagining there are a number of EAs who identify as longtermist first, and, to them, OP’s argument would have some purchase. Then there’s a second group who may find longtermism interesting, but they still have other commitments that they’re prioritizing (liberalism, rights, leftism, social justice, global health, and so on), and they’re unlikely to forsake those views in favor of a longtermist proposal that is, in a sense, pretty radical. I suspect the second group is larger than the first, but the impression that the former group is central to EA could lead to people viewing EA as not worth the time.
the impression that the former group is central to EA could lead to people viewing EA as not worth the time.
You’re completely right that EA should strive to be a big tent and alienate as few people as possible. Do you think it’s possible that the impression that EA is “very, very compatible with mainstream left-of-center liberalism/leftism” could contribute to less than 1% of EAs identifying as politically “right”?
Given this information, how do you think we should prioritize between appeals to one political group which could alienate a different political group?
(Note that I’m not arguing here that this particular post helps avoid alienating potential EAs on net—just that there are other groups we should consider too when thinking about what EA can do to help more people feel we’re compatible with their values.)
If you find this disturbing, there are a few philosophical outs:
I would like to object to framing these as “outs”: I disagree with several assumptions that aren’t stated here (that possibly existing people or fetuses have equivalent moral value to currently existing people, that we have equal moral duties to safeguard the well being of both groups, that allowing a life to come into existence is equally as good as saving a life), but it isn’t because I’m looking for an out from an inconvenient conclusion. I believe we have a duty to try and make the world a better place for future generations, and that we must avoid making it a worse one, but I don’t consider myself a longtermist and I strongly disagree that maximising the total number of people who are happier than the critical level is a worthwhile goal, as opposed to trying to maximise the median or average happiness of a smaller number of people. These are normative disagreements, and they are why I don’t find this argument compelling. Your case isn’t falling on deaf ears, I understand it and I simply disagree.
Partisanship and tribalism likely explain a majority of why it consumes so much of the public attention, including why you and I care about it so much.
I also strongly disagree with this. I did not grow up somewhere where abortion was a hot-button or partisan issue the way it is in the US, and I don’t believe that it’s a partisan or tribal issue. This may be the case for you, but I care about this issue because a) as bruce points out below, the right to have abortions is an issue that deeply affects an enormous number of people, and I would very much like them, and myself, to have the option to exercise free will and bodily autonomy on issues that have huge impacts on our lives; and b) I highly value rights, particularly social and positive rights, as a moral good, and this is one among many that are vitally important.
Instead of distancing yourself from EA, what’s preventing you from advocating for more “explicit countervailing frameworks prioritizing the rights and needs of already existing people?”
Again, bruce’s comment articulates my feelings on this very well. This is not the only problem I have with EA as it currently operates, but it is a major ideological one. I want to make it clear that the problem for me isn’t only “longtermist EA might be anti-abortion”; it’s how longtermist EA can be extended to reach this conclusion, and the further implications of that. Overlooking or writing off the needs and rights of currently existing people in service of creating more future people is, overall, a stance that I find morally unacceptable and actively harmful. While I know that there are longtermist EAs who feel and think similarly, longtermist EA in general seems likely to tend towards this direction (or in fact may actively be tending towards it) unless this is addressed in some way. That this issue exists in the first place is, to me, either indicative of willful oversight and disregard (which is possible and also obviously perfectly fine, if that is what longtermist EAs believe, but in my opinion morally bad), or a serious blindspot that is so enormous in size and implications that it lowers my regard for the intellectual rigor of longtermist EA. I’d also like to add that, in addition to the asymmetrical costs and time issues, one reason why I’m not writing posts advocating for “maybe we should care about the human rights of currently existing people, and maybe we should care about it more than creating more people in the future” is that having to argue for caring about human rights in a social movement that is about doing the most good possible is, to me, a clear indicator that that movement is not the right place for me.
Thank you for explaining how you feel. For what it’s worth, is there anything I or others who believe this is an implication of longtermism can do which would help you or others who share your perspective or feelings about EA to feel more welcome?
(No need to answer if you feel like it would take asymmetric effort, or wouldn’t be worth it, or for any other reason.)
Thank you as well, it definitely is worth something. I’ve been thinking about this for the past couple of days and I can’t come up with a very satisfying answer, and this will likely be a bit of a digression so I apologize for that.
I think the short answer is perhaps not very much. (There may be others who are more imaginative than I am who can come up with actual actionable things here.) The EA Forum in particular tends to value evidence-based, well-sourced and impartially argued posts. This is a good thing in many cases, but it does create trade-offs in terms of being “welcoming”[1], and I think this is one of them: I recognize and respect that many people who oppose abortion do so from a place of believing that a fetus is a person and caring very deeply about the lives of unborn children, but the corollary to that is a reduction of the rights of bodily autonomy and self-determination of a huge number of people. It’s very challenging for me personally to find the desire and follow-through to sit down and spend hours putting together a post that argues for respect for my rights because, to put it simply, the idea that I need to argue that my rights and the rights of roughly half the world’s population should be valued and respected in this type of format (or at all, even) in the context of a movement that values doing good is extremely disheartening and demotivating, particularly when those rights are already being eroded across the world. I imagine that there are people who are invested enough in EA and/or longtermism that the emotional and time costs of doing that will be worth it, but I don’t think I am.
Also, this is a relatively minor point, and I only mention it because you do seem to care about this; I in no way mean to come off as accusatory, as I sincerely believe it was without malice or bad intent. But writing off people’s sincerely held beliefs and priorities as a result of partisanship or tribalism, or assuming that a question will fall on deaf ears, does have a chilling effect, at least for me. I have done my best to engage with this post respectfully and in good faith, even when I strongly disagree with many parts of it, and these comments make me feel like my disagreements are being treated as incorrect received knowledge rather than as considered and examined beliefs deserving of respect.
I put this in quotes because it doesn’t quite sit well with me, but I’m not sure what would work better. It’s not that I feel a post that is of “lower quality” by these standards wouldn’t be welcome, necessarily, but that it would probably be met with a lot of questions and demands to conform more to those standards. That is, to some extent, fair enough, as every space is obviously allowed to have its own discursive norms, but it does come with costs in some cases.
Thanks for your reply. No worries if you feel like you weren’t able to come up with a satisfying answer—given that you’re engaging in a dialogue where you reasonably perceive your counterpart to be going after your rights, your reaction has been very understandable.
I’d like to apologize for the characterization of your sincerely held views as tribalism. It wasn’t empathetic or helpful to our dialogue.
“abortion is morally wrong” is a direct logical extension of a longtermist view that highly values maximizing the number of people on the assumption that the average existing person’s life will have positive value
I’m a bit confused by this statement. Is a world where people don’t have access to abortion likely to have more aggregate well-being in the very long run? Naively, it feels like the opposite to me.
To be clear, I don’t think it’s worth discussing abortion at length, especially considering bruce’s comment. But I really don’t think the number of people currently existing says much about well-being in the very long run (arguably negatively correlated). And even if you wanted to increase near-term population, reducing access to abortion is a very bad way to do that, with lots of negative knock-on effects.
I agree with everything you’ve said here.
What I was saying is that, for the type of longtermism that assumes that the average person’s life will be of positive value, that it is morally good to maximize the total number of people in order to maximize total happiness, and that allowing a life to come into existence is as good as saving a life, abortion seems to be morally bad, unless you argue that banning abortion would have enough of a negative effect to outweigh the value of all the additional lives that would exist under a ban (which I think one could definitely argue). I say “type of longtermism” because there are definitely different approaches to longtermism and these assumptions are not representative of all of them, and I disagree with many of the assumptions here. I particularly disagree that total value or wellbeing, as opposed to aggregate as you mention in your comment, is a meaningful metric, but I realize there are different views on that.
I guess I did mean aggregate in the ‘total’ well-being sense. I just feel pretty far from neutral about creating people who will live wonderful lives, and also pretty strongly disagree with the belief that restricting abortion will create more total well-being in the long run (or short tbh).
For total-view longtermism, I think the most important things are ~civilization is on a good trajectory, people are prudent/careful with powerful new technology, the world is lower conflict, investments are made to improve resilience to large catastrophes, etc. Restricting abortion seems kinda bad for several of those things, and positive for none. So it seems like total-view longtermism, even ignoring all other reasons to think this, says abortion-restriction is bad.
I guess part of this is a belief that in the long run, the number of morally-valuable lives & total wellbeing (e.g. in 10 million years) is very uncorrelated or anti-correlated with near-term world population. (Though I also think restricting abortion is one of the worst ways to go about increasing near-term population, even for those who do think near-term & very-long-term population are pretty positively correlated.)
One could hypothetically believe that abortion is morally wrong, but that intervening to involuntarily reduce it is either:
Bad on net, because it damages the norm of personal autonomy, or
Insufficiently good on net, because there are better ways to increase the near-term population than by reducing abortion access
So rejecting the implications you outlined doesn’t necessarily mean rejecting the idea that abortion is intrinsically morally wrong.
I don’t think near-term population is helpful for long-term population or wellbeing, e.g. in >10,000 years from now. More likely negative effect than positive effect imo, especially if the mechanism of trying to increase near-term population is to restrict abortion (this is not a random sample of lives!)
I also think it seems bad for general civilization trajectory (partially norm-damaging, but mostly just direct effects on women & children), probably bad for ability to make investments in resilience & be careful with powerful new technology. These seem like the most important effects from a longtermist perspective, so I think abortion-restriction is bad from a total-longtermist perspective.
Understandable! Would you still say, though, that abortion is intrinsically morally bad? (As in the above, that doesn’t at all mean you have to endorse involuntary methods of reducing it.)
No, though maybe you’re using the word “intrinsically” differently? For the (majority) consequentialist part of my moral portfolio: The main intrinsic bad is suffering, and wellbeing (somewhat broader) is intrinsically good.
I think any argument about creating people/etc is instrumental—will they or won’t they increase wellbeing? They can both potentially contain suffering/wellbeing themselves, and affect the world in ways that affect wellbeing/suffering now & in the future. This includes effects before they are born (e.g. on women’s lives). TBH given your above arguments, I’m confused about the focus on abortion—it seems like you should be just as opposed to people choosing not to have children, and focus on encouraging/supporting people having kids.
For now, I think the ~main thing that matters from a total-view longtermist perspective is making it through “the technological precipice”, where risks of permanent loss of sentient life/our values are somewhat likely, so other total-view longtermist arguments arguably flow through effects on this, plus influencing the trajectory for good. Abortion access seems good for civilization trajectory (women can have children when they want, don’t have their lives & health derailed, etc.), more women involved in the development of powerful technology probably makes these fields more cautious/less rash, there are fewer ‘unwanted children’ [who probably have worse life outcomes], etc. So abortion access seems good.
Maybe related: in general when maximizing, I think it’s probably best to find the most important 1-3 things, then focus on those. (E.g. for the temperature of my house, focus on the thermostat setting + outside temperature + insulation quality, and ignore body heat & similar small factors.)
Thanks for this detail! Yeah, I agree that encouraging/supporting people having kids is a more effective approach, and that other things matter more from a total longtermist perspective. (In particular, if human extinction does occur in the near term, then factory farming plausibly outweighs everything good we’ve ever done. Either way, we have much to catch up on.)
To be more precise on the question, do you think that with all else equal, choosing to have a child is better than choosing to abort, assuming that the child will live a net good life (in expectation)? (This is what I was trying to capture with the word “intrinsic”—without accounting for concerns of norms, opportunity costs, other interventions dominating, etc i.e. as a unitary yes-or-no decision.)
Your advice on optimization is definitely correct, and I have many regrets about the framing of this post, some of which I enumerate here.
Regardless of whether we’re for or against abortion, I think it’s meaningful that there was no attempt to debate this assertion in the comments. Here it is stated precisely in a single sentence:
If you find this disturbing, there are a few philosophical outs:
Reject (1a).
Reject (1b): Richard Chappell does this, embracing a hybrid person-affecting and non-person-affecting view.
Reject (2): Some suffering-focused utilitarians / anti-natalists would be sympathetic to this.
Accept (3), but argue that saving lives today is bad. For example, you could use the meat-eater problem, or argue that adding a person today makes us more susceptible to the unilateralist’s curse, which could expose us to greater x-risks or s-risks.
Although it seems that this will fall upon deaf ears, if you really find this argument compelling and think you have to choose between (a) abortion is morally wrong and (b) distancing yourself from EA, why are you choosing (b)?
Even if abortion is morally wrong, it seems that virtually every EA cause is more important than abortion. People in extreme poverty don’t stop mattering. Neither do animals in factory farms, or people/animals/sentient AIs who may exist in the far future. Compared to the scale of these issues, abortion is comparatively small.
Partisanship and tribalism likely explain a majority of why it consumes so much of the public attention, including why you and I care about it so much. Instead of distancing yourself from EA, what’s preventing you from advocating for more “explicit countervailing frameworks prioritizing the rights and needs of already existing people?” It seems that many EAs already agree with you.
My guess is that women who wish to have the option of having an abortion and not live with the stresses of feeling like they have no choice but to have a child they don’t want will probably disagree that they feel strongly about it because of “tribalism”.
One plausible reason is that this is not something they actually want to spend their time on.
Lets say you want to spend your time in the EA community discussing whether or not abortion is morally acceptable on longtermist grounds. But in the EA community, you frequently hear discussions around whether or not [insert your ethnicity here] can actually be capable of high quality intellectual contributions, or potentially whether people like you even deserve basic human rights!
You think this is clearly wrong by any moral framework you deem acceptable, and don’t particularly enjoy discussing this (because you want to focus on the more important discussion of whether abortion is morally acceptable or not), and find it surprising that so many people in this otherwise like-minded community have somehow come to a conclusion that you feel is so unintuitive and morally unacceptable, and have done so under what you believe to be the views of the community (or at least parts of it). You think to yourself, “Maybe these people aren’t as like-minded as I thought they were, maybe we don’t quite share the same values.” And when you express your dissatisfaction and desire to distance yourself, one of the people who were vocally against [your people]’s rights say, “Well, why don’t you stay in the community so you can make a case for your position and argue for it?”
I’m not claiming at all that this is the reason for lastmistborn’s distancing from the EA community; I’m just illustrating one plausible reason, mainly to indicate that they should have no obligation or expectation placed on them to stay, nor to argue against views they find disturbing before they choose to distance themselves (I recognise you’re not explicitly doing this!). But basically, the costs of engagement here are asymmetrical: it’s much less (e.g. emotionally) costly for people to raise a discussion point that is perceived to be against someone else’s rights than it is for those who perceive their rights to be challenged to engage and justify why they think they deserve those rights.
Yes, this seems accurate. I’ve spent some time in liberal/left spaces talking about EA with folks who highly prioritize pro-choice policy in their politics (say that 5 times fast!). If they viewed OP’s arguments as being roughly synonymous with EA as a whole (they’re not, but that doesn’t mean the impression couldn’t exist), it would be totally understandable, I think, for them to dismiss the rest of EA. “This community doesn’t share my values,” they might say, as bruce alludes to.
Personally, I think EA is very, very compatible with mainstream left-of-center liberalism/leftism, and, in my view, a pro-choice ethic is probably a very significant part of that. Not to say that OP’s view is indefensible; it’s just that I think there is a tension between their stated arguments and the broader values and politics that are the foundation of most EAs’ actually-existing political views.
Tentatively, I’m imagining there are a number of EAs who identify as longtermist first, and, to them, OP’s argument would have some purchase. Then there’s a second group who may find longtermism interesting, but they still have other commitments that they’re prioritizing (liberalism, rights, leftism, social justice, global health, and so on), and they’re unlikely to forsake those views in favor of a longtermist proposal that is, in a sense, pretty radical. I suspect the second group is larger than the first, but the impression that the former group is central to EA could lead to people viewing EA as not worth the time.
You’re completely right that EA should strive to be a big tent and alienate as few people as possible. Do you think it’s possible that the impression that EA is “very, very compatible with mainstream left-of-center liberalism/leftism” could contribute to less than 1% of EAs identifying as politically “right”?
(source)
Given this information, how do you think we should prioritize between appeals to one political group which could alienate a different political group?
(Note that I’m not arguing here that this particular post helps avoid alienating potential EAs on net—just that there are other groups we should consider too when thinking about what EA can do to help more people feel we’re compatible with their values.)
This pretty much hits the nail on the head, thank you for articulating it so well.
I would like to object to framing these as “outs”: I disagree with several assumptions that aren’t stated here (that possibly existing people or fetuses have equivalent moral value to currently existing people, that we have equal moral duties to safeguard the well-being of both groups, that allowing a life to come into existence is equally as good as saving a life), but it isn’t because I’m looking for an out from an inconvenient conclusion. I believe we have a duty to try to make the world a better place for future generations, and that we must avoid making it a worse one, but I don’t consider myself a longtermist, and I strongly disagree that maximising the total number of people who are happier than the critical level is a worthwhile goal, as opposed to trying to maximise the median or average happiness of a smaller number of people. These are normative disagreements, and they are why I don’t find this argument compelling. Your case isn’t falling on deaf ears; I understand it and simply disagree.
I also strongly disagree with this. I did not grow up somewhere where abortion was a hot-button or partisan issue the way it is in the US, and I don’t believe that it’s a partisan or tribal issue. This may be the case for you, but I care about this issue because a) as bruce points out below, the right to have an abortion is an issue that deeply affects an enormous number of people, and I would very much like them, and myself, to have the option to exercise our free will and bodily autonomy on issues that have huge impacts on our lives, and b) I highly value rights, particularly social and positive rights, as a moral good, and this is one among many that are vitally important.
Again, bruce’s comment articulates my feelings on this very well. This is not the only problem I have with EA as it currently operates, but it is a major ideological one. I want to make it clear that the problem for me isn’t only “longtermist EA might be anti-abortion”; it’s how longtermist EA can be extended to reach this conclusion, and the further implications of that. Overlooking or writing off the needs and rights of currently existing people in service of creating more future people is, overall, a stance that I find morally unacceptable and actively harmful. While I know that there are longtermist EAs who feel and think similarly, longtermist EA in general seems likely to tend towards (or in fact may actively be tending towards) this direction unless this is addressed in some way. That this issue exists in the first place is, to me, either indicative of willful oversight and disregard (which is possible and also obviously perfectly fine, if that is what longtermist EAs believe, but in my opinion morally bad), or a serious blindspot that is so enormous in size and implications that it lowers my regard for the intellectual rigor of longtermist EA. I’d also like to add that, in addition to asymmetrical costs and time issues, one reason why I’m not writing a post advocating for “maybe we should care about the human rights of currently existing people, and maybe we should care about that more than creating more people in the future” is that having to argue for caring about human rights in a social movement that is about doing the most good possible is, to me, a clear indicator that that movement is not the right place for me.
Thank you for explaining how you feel. For what it’s worth, is there anything I or others who believe this is an implication of longtermism can do which would help you or others who share your perspective or feelings about EA to feel more welcome?
(No need to answer if you feel like it would take asymmetric effort, or wouldn’t be worth it, or for any other reason.)
Thank you as well, it definitely is worth something. I’ve been thinking about this for the past couple of days and I can’t come up with a very satisfying answer, and this will likely be a bit of a digression so I apologize for that.
I think the short answer is perhaps not very much. (There may be others who are more imaginative than I who can come up with actual actionable things here.) The EA Forum in particular tends to value evidence-based, well-sourced, and impartially argued posts. This is a good thing in many cases, but it does create trade-offs in terms of being “welcoming”[1], and I think this is one of them: I recognize and respect that many people who oppose abortion do so from a place of believing that a fetus is a person and caring very deeply about the lives of unborn children, but the corollary to that is a reduction in the rights of bodily autonomy and self-determination of a huge number of people. It’s very challenging for me personally to find the desire and follow-through to sit down and spend hours putting together a post that argues for respect for my rights because, to put it simply, the idea that I need to argue that my rights and the rights of roughly half the world’s population should be valued and respected in this type of format (or at all, even) in the context of a movement that values doing good is extremely disheartening and demotivating, particularly when those rights are already being eroded across the world. I imagine that there are people who are invested enough in EA and/or longtermism that the emotional and time costs of doing that will be worth it, but I don’t think I am.
Also, this is a relatively minor point, and I only mention it because you do seem to care about this. I in no way mean to come off as accusatory, as I sincerely believe it was without malice or bad intent, but writing off people’s sincerely held beliefs and priorities as a result of partisanship or tribalism, or assuming that a question will fall on deaf ears, does have a chilling effect, at least for me. I have done my best to engage with this post respectfully and in good faith, even when I strongly disagree with many parts of it, and these comments make me feel like my disagreements are being treated as incorrect received knowledge rather than considered and examined beliefs that are treated with respect.
I put this in quotes because it doesn’t quite sit well with me, but I’m not sure what would work better—it’s not that I feel that a post that is of “lower quality” by these standards wouldn’t be welcome, necessarily, but that it would probably be met with a lot of questions and demands to have it conform more to those standards—which is, to some extent, fair enough, as every space is obviously allowed to have its own discursive norms, but it does come with costs in some cases.
Thanks for your reply. No worries if you feel like you weren’t able to come up with a satisfying answer—given that you’re engaging in a dialogue where you reasonably perceive your counterpart to be going after your rights, your reaction has been very understandable.
I’d like to apologize for the characterization of your sincerely held views as tribalism. It wasn’t empathetic or helpful to our dialogue.