I am open to the idea that Longtermism is hard to disentangle from eugenics. How about a “things that are unacceptable” statement? E.g.:
It is awful to force reproductive changes on people.
Eugenics has a history of awful outcomes.
While there is value in considering how humanity can be better, we should be deeply skeptical of lines of thought that involve making choices for others.
While there is value in deciding who we can help the most, making judgements based on skin colour has always been a reductive and cruel exercise rather than an exercise in truth-seeking. We reject racism and the narrowing of our circles of concern under the guise of pragmatism.
I don’t love seeming to agree with criticisms that Longtermism is racist/eugenicist, but if it’s a thing people believe then perhaps an open letter or something similar is a good response.
“Eugenics” as typically considered is very different from human genetic enhancement of the type in which parents voluntarily select embryos to implant during IVF, as discussed in Bostrom and Shulman’s article “Embryo Selection for Cognitive Enhancement: Curiosity or Game Changer?”
Eugenics of the twentieth century was bad because it harmed people. Who is harmed when a mother voluntarily decides to select an embryo that is genetically predisposed to be healthier, happier or more intelligent? To prevent a mother from having the right to genetically enhance (engage in “eugenics”) would be coercion in reproduction.
Seeing as embryo selection can extend a child’s healthspan and mental well-being by selecting against schizophrenia/depression/etc., I think it is extraordinarily moral to support this practice even if it could correctly be called “eugenics.” I will stand on the side of less premature death and suffering even if it means I can be grouped with other bad “eugenicists” for rather tenuous reasons.
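To make the size of the effect being debated concrete, here is a toy order-statistics simulation of one round of embryo selection. This is my own illustration rather than a figure from the Bostrom and Shulman paper, and it assumes a perfectly predictive, normally distributed polygenic score, which is a strong simplification of real predictors:

```python
import random
import statistics

def expected_gain(n_embryos, trials=20000, seed=0):
    """Toy Monte Carlo estimate of the expected gain (in standard
    deviations) from implanting the highest-scoring of n_embryos,
    assuming embryo scores are i.i.d. standard normal and the
    predictor is perfect -- both strong simplifications."""
    rng = random.Random(seed)
    gains = [max(rng.gauss(0.0, 1.0) for _ in range(n_embryos))
             for _ in range(trials)]
    return statistics.mean(gains)

# Picking the best of 10 gives roughly +1.5 SD on the selected score;
# picking 1 of 1 gives ~0 (no selection happens).
print(round(expected_gain(1), 2), round(expected_gain(10), 2))
```

In practice imperfect predictors shrink these gains considerably, but the sketch shows why selecting among even ten embryos is a meaningfully different intervention from doing nothing.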
I currently think of Longtermism as an idea and not a set of people, and I don’t think the basic idea that we’re going to influence many people in the future and we should have their voice in mind is bad, or racist, or eugenicist.
However, I think some of the prevalent views among people who identify as Longtermists, and especially among the key figures promoting it—like Bostrom and MacAskill*—are indistinguishable from eugenics. They’re pretty easy to think of as different, due to the points you raised—but I’m convinced they’d lead to the same kinds of outcomes.
*Edit: I want to clarify that I think this much more strongly about Bostrom than about MacAskill—e.g. I think MacAskill isn’t racist—but my point still stands.
Very strong disagree here.
Bostrom endorses positive selection for beneficial traits (via e.g. iterated embryo selection); he doesn’t support negative selection (i.e. preventing people who have less of the beneficial trait from reproducing).
I think positive selection for beneficial traits/human enhancement more generally is good.
I like that you make a distinction between longtermism, the idea, and other “related” views that are prominent among longtermists, but logically distinct from longtermism.
I disagree with calling the other views (like transhumanism, though that’s a broad tent) “indistinguishable from eugenics.” I find that statement so wrong that I downvoted the comment even though I really liked that you pointed out the above distinction.
On transhumanism among longtermists, I like Cinera’s point about focus on positive selection, but I also want to make a quite different point in addition, on how many longtermists, as far as I’m aware, don’t expect “genetics” to play a big role in the future. (People might still have views on thought experiments that involve genes; I’m just saying those views are unlikely to influence anything in practice.) Many longtermists expect mind uploading to become possible, at which point people who want to be uploaded can enter virtual worlds (and ones who don’t want it can stay back in biological form in protected areas). Digital minds do not reproduce the biological way with fusion of gametes (I mean, maybe you could program them to do that, but what would be the point?), so the whole issue around “eugenics” no longer exists or has relevance in that context. There would then be lots of new ethical issues around digital minds, explored here, for instance. I think it’s important to highlight that many (arguably most?) longtermists who think transhumanism is important in practice mostly mean mind uploading rather than anything related to genes.
So, it might be interesting to talk about attitudes around mind uploading. I think it’s very reasonable if some people are against uploading themselves. It’s a different question whether someone wants to prohibit the technology for everyone else. Let’s assume that society thinks carefully about these options and decides not to ban all forms of mind uploading for everyone. In that scenario, everything related to mind uploading becomes “transhumanism.” There’ll be a lot of questions around it. In practice, current “transhumanists” are pretty much the only people who are concerned about bad things happening to digital minds or bad dynamics among such minds (e.g., Malthusian traps) – no one else is really thinking about these scenarios or considers them important. So, there’s a sense in which you have to be a transhumanist (or at least participating in the discourse) if you think it matters what’s going to happen with digital minds. And the motivation here seems very different from the motivation behind eugenics – I see it as forecasting (the possibility of) radical societal changes and thinking ahead about what are good vs. bad options and trajectories this could take.
I agree (strongly upvoted), but I think iterated embryo selection is likely to become feasible before mind uploading in the mainline scenario. It may not be all that relevant to humanity’s long-term future (genetics-based human enhancement needs decades to cause significant society-wide changes) except under long timelines, but long timelines are feasible, especially for mind uploading technology.
In the year of our Lord 2023, we still cannot upload C. elegans.