I’m not sure I’m in favor of a liberty as broad as what’s proposed in the links. Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more “trivial” traits might backfire. I think a trajectory like that would be more effective in practice.
For the sake of honesty, and since everyone will be thinking about all those traits anyway, I think we may as well just have the discussion now. People are generally actually pretty open to talking about these things, I think.
It’s not some secret topic. There are tons of academic papers in mainstream journals discussing all sorts of ethical, moral, social, regulatory, technical, scientific, and practical aspects of various sorts of reprogenetics and advanced ARTs (PGT, embryo editing, gamete selection, IVG, even ectogenesis and cloning). There’s even an academic paper looking at the mathematics of chromosome selection! People run big polls of the public’s opinions about these things; there are national and international committees (scientific, governmental) discussing how to regulate these technologies; there are panel discussions, talks at conferences, statements by advocacy groups, etc. There’s a lot of work to be done in clarifying, improving, and advancing these discussions, but it’s not like some alien taboo topic.
If you meant in terms of the actual rollout, I’m not sure. It’s true that people are more worried about cognitive traits (including intelligence) and appearance stuff than decreasing disease. My current guess is that people are less actually taking a strong reasoned-out stance against increasing intelligence, and rather they are just not sure how to separate out that use from other worse uses, but really I should talk to more people who actually hold various positions like this.
Intuitively I don’t get what’s so bad about affecting appearance, except for the runaway competition thing where everyone wants tall sons. But non-intuitively, I can also see that this would be a vector for “soft eugenics”; e.g. in a racist society parents could be diffusely pressured into making their kid lighter-skinned (cf. “face bleaching”). Part of my thinking here is that genomic liberty works in the context of multi-generational feedback. In that context, it seems better to err on the side of more liberty rather than less, because we can regulate later when we see that things are going wrong, but deregulating is hard because you aren’t getting feedback about how the deregulated version would go. (Cf. https://berkeleygenomics.org/articles/Genomic_emancipation.html#habermas-and-multigenerational-feedback )
A vision of genomic emancipation based on freedom of choice and plurality might work in the democratic West, but other states don’t necessarily see those as values, so it seems unlikely they would adopt a similar vision.
This might be right. I’m really unsure what would happen. I’m also not sure if this should be a crux.
I do, though, think it’s much better for reprogenetics to be developed in a strongly liberal democracy first, so that a good version of a society with reprogenetics can be worked out. Say what you will about it, but AFAIK the US is the most successfully diverse / pluralistic state in history, maybe by far, in terms of global languages, cultures, ethnicities, religious beliefs and practices, political views, etc. (Some empires are contenders, maybe; but that’s by conquering many nations and then in some cases being nice. India is highly diverse, but I think it’s not globally diverse in the same way.) I think an awesome liberal pluralistic version of reprogenetics is going to be hard to beat. (“Eugenics with Chinese characteristics”, as it were.)
I’m not sure they would do much, because AFAIK they already aren’t doing much. They already could do coercive person-wise eugenics, and AFAIK they aren’t? I guess in some cases, actual genocides could be motivated by eugenical reasoning? Of course, the Nazis were. If they wanted to do somewhat less coercive but still coercive eugenics, they could force IVF and preimplantation genetic testing on their subjects, but they aren’t AFAIK. Presumably the incentive (real or perceived) would increase as the effectiveness of reprogenetics increases, though, so this pattern could change. I would imagine that it’s ~inherently difficult to regulate reproduction, however. Like, what are you going to do? Stop people from screwing? You can do it, but you have to get really violent on a mass scale. (I hope this isn’t taken as a dismissal; I mean this as my first reaction in a conversation, to elicit a more specific plausible scenario. I’ve talked to at least one person living in an oppressive regime who was worried about the regime doing population control—specifically, controlling genetics of personality.)
Regarding whether this should be a crux, I’m also unsure. In general, I’m not trying to be straightforwardly (/naively/myopically) consequentialist. In other words, I wouldn’t simply count up the nations that would do a big bad thing with tech, and the ones that would do a big good thing, and then see which amounts to more. For one thing, it feels weird to think that I’m going to not use some technology to help my own child, just because you might use that technology to harm yours. I would also want to think about the longer term; the liberal pluralistic version could help usher in a great future (as part of broader progress), and I want to hasten that—I don’t think we want to progress at the rate of the least moral country, or something. IDK.
All that said, I do think we should work on international regulatory regimes for reprogenetics. I think there are probably some core aspects of genomic liberty that could be reasonably instituted at the international level, that might significantly alleviate these risks. For example “No regime should ever coerce any of its subjects to have children” or “No regime should ever coerce any of its subjects to have certain personality traits”. These might be hard to formalize / operationalize. Would take more work.
Another avenue is professional and scientific norms within those communities. These technologies take a lot of technical and scientific know-how. As an example, different ancestry groups—at least at the moment—need to collect genome data and construct new PGSes in order to use polygenic reprogenetics. (This isn’t a good thing because it can lead to unequal access, and hopefully it can be attenuated by better genetics models.) My point is just that this is an example where a country can’t just snap its fingers and implement this stuff without some buy-in from scientists etc. Another example is that IVF is not trivial to do; you need ultrasound, medication expertise, anesthesiologists, and a surgeon. Another example: IVG would likely take quite a while to scale up and innovate so strongly that it’s a routine thing (I’m just guessing here; are there cases where complex stem cell differentiation is done routinely in many many labs?).
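As a toy illustration of why PGS quality matters here (the function name, parameters, and numbers below are mine, not from any cited work): when you pick the top-scoring embryo out of n, the expected gain in the actual trait scales with the correlation between score and trait, so a score that captures little variance—roughly the situation for a PGS ported to an ancestry group it wasn’t trained on—delivers a correspondingly shrunken gain. A minimal Monte Carlo sketch:

```python
import random
import statistics

def expected_gain(n_embryos, r2, trials=20000, seed=0):
    """Toy model: pick the embryo with the highest polygenic score among
    n_embryos siblings, and return the mean TRUE trait value of the pick,
    in sibling standard-deviation units. r2 is the fraction of trait
    variance the score captures; lower r2 (e.g. a poorly ported score)
    shrinks the realized gain toward zero."""
    rng = random.Random(seed)
    r = r2 ** 0.5  # correlation between score and true trait value
    picks = []
    for _ in range(trials):
        embryos = []
        for _ in range(n_embryos):
            true = rng.gauss(0, 1)  # true (unobserved) trait value
            # Observed score = signal + independent noise, unit variance.
            score = r * true + (1 - r2) ** 0.5 * rng.gauss(0, 1)
            embryos.append((score, true))
        picks.append(max(embryos)[1])  # trait value of the top-scored embryo
    return statistics.mean(picks)
```

For example, `expected_gain(10, 0.3)` comes out well under the roughly 1.54 SD you’d get from selecting on the true trait directly, and halving r2 roughly shrinks the gain by a factor of sqrt(2); that’s the sense in which better ancestry-matched data is a gating resource.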
There are also probably at least a few cases where the scientific community could avoid certain advances, or keep them private, at least partly / for some time. For example, I’d oppose doing any work to refine an “obedience PGS”, though it gets awkward because various things that you do want to have PGSes for could be correlated a bit with obedience. FWIW, personality seems significantly harder to model, at least for now.
All of this would make the relationship between parents and children even harder. Where before you could only blame chance for your traits, there would now be actual people responsible for many of your characteristics. This is even more true if parents choose not to modify you, leaving you at a disadvantage while everyone else “improved” their children.
I think that’s probably true in aggregate, but as someone who didn’t get reprogenetics but would like to give it to my future children, that’s a cost I’d be willing to pay. I hear the argument that merely creating the option means everyone automatically pays the cost. But I think this would prove too much? Like, it applies just as much to any new thing you create, which parents could in theory give to their kids, but might not want to.
Wouldn’t it be worth focusing, in parallel, on technologies that allow for this when someone is already an adult and can choose for themselves? Especially regarding HIA. This would solve several ethical problems, particularly the fact that it wouldn’t be a choice made by someone else. It would also be perceived as less “unnatural,” I think. In a way, people already try to do this with the limited tools we have now. I realize this is mostly a technological problem since such tech is currently “sci-fi,” but that probably won’t be the case forever.
Absolutely! I think there are several kinda-sorta-plausible paths to this. But, they’re all pretty speculative and also hard to accelerate, and in some cases potentially quite dangerous. See https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods . Since that post, I’ve done bits of research about these on the side, but haven’t found any big updates that make it seem more feasible. One throughline is that reprogenetics is the only case where you can actually get longitudinal, end-to-end empirical data about the effects of potential interventions on intelligence and other interesting traits. You can observe actual people with different behaviors and different genes. But what are you going to do with your new brain drug that wipes out all the PNNs in someone’s association cortex? Just try it and hope that you don’t completely scramble their mind? Or try it on a chimpanzee, and hope that better termite-fishing or digit recall in chimps would translate to conceptually creative problem solving ability in humans? It could work, but IDK. That said, there could totally be several plausible ways, and I’m interested in researching those. You do also get the advantage of slightly faster iteration cycles.
Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more “trivial” traits might backfire. I think a trajectory like that would be more effective in practice.
If you meant in terms of the actual rollout,
Yeah, I meant in terms of practical adoption. A democratic state will initially face strong pressure to restrict or ban technologies that the majority of the population strongly disagrees with. Even though this topic is already debated, this debate probably still feels pretty ‘alien’ to ordinary people. I don’t think a large portion of the public could easily accept it, especially in its broad ‘total liberty’ version.
Human reproduction is seen as something sacred. To intervene in a way that feels justifiable to common people, you’d need a justification that’s just as ‘sacred’ or important. Fighting diseases definitely fits that for most reasonable people. Even increasing intelligence or creativity could be seen as obviously useful, even if not sacred. But claiming the right to choose the fine details of your child’s personality would look like the classic ‘playing God’ scenario, which could turn a lot of people against the whole thing. Even worse, allowing total liberty over ‘trivial’ traits (though I agree they aren’t often actually so trivial) would act as a perfect strawman for anyone wanting to attack this. It gives the idea of children as ‘consumer products’ you pick at a supermarket based on trends, like choosing a dog breed because it’s fashionable. These associations would be horrific for many people and maybe would overshadow the actual concrete benefits of these technologies.
I think we tend to underestimate how much people would resist change when it comes to deeply rooted traditions, and probably even more for basic biological functions like natural reproduction. We can just look at the rejection of GMOs: they are mostly proven to be safe, yet they are still banned or hated in many places.
My point is that by strongly advocating for everything at once, we may risk an ‘all-or-nothing’ rejection. Giving people time to get used to the technology and to see that nothing ‘demonic’ happens seems like a more plausible way to gain long-term acceptance. Not that discussing everything now is unreasonable, but we should be aware that it might be a hard thing to pull off. And we should therefore try to focus on at least securing the less controversial interventions (such as preventing disease and improving intelligence).
That said, the fact that this could potentially be a big new business might be a strong incentive, especially in a country like the US. So maybe I’m being too pessimistic here.
I agree with the rest of your observations. I don’t think the critical points I raised are, in themselves, sufficient reasons not to adopt the technology, but it’s obviously important to have them clear from the start and try to prevent them as much as possible.
Sorry (again) for this very late reply!
You may be right, IDK. Will have to think more.