Here are some quick thoughts that come to mind after reading your post. I find HIA and reprogenetics to be fascinating topics, but I see several critical hurdles if we frame them primarily as tools for mitigating AI-related existential risk.
The biggest logical hurdle is time. AI development is moving at a breakneck pace, while biological HIA interventions (such as embryo selection) take decades to manifest in the real world. An enhanced human born today will not be an active researcher for at least 20–25 years. If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
I notice you address this objection by arguing that even a 10-year acceleration in a 40–50 year horizon still represents a meaningful reduction in existential risk. I find this partially compelling — but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force. Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
We could also consider a complementary path: the top priority remains creating a safe, aligned AI. Once achieved, we can use that superintelligence to help us develop HIA and advanced biotechnology far more rapidly and safely than we ever could on our own.
Furthermore, just as we fear unaligned AI, we should fear “unaligned” superintelligent humans. This risk may be even greater, as humans are not “programmed” for pure rationality; we are driven by complex emotions, tribalism, and deep-seated cognitive biases. Therefore, any HIA research should prioritize and fund moral enhancement (e.g., increasing empathy and compassion, reducing cognitive biases) alongside cognitive gains. This is crucial to avoid creating highly intelligent but destructive actors.
If we imagine a future philanthropic program to make these enhancements accessible for free, one could hypothesize a form of “bundling”: making the cognitive upgrade conditional on a voluntary moral/character upgrade. While not a state mandate — and admittedly open to hard questions about who defines “moral improvement” and the risk of paternalism — it would act as a soft requirement for those choosing to use subsidized resources, thereby incentivizing positive social evolution.
A clear advantage of HIA over pure AI development is the guarantee of consciousness. If a non-conscious, unaligned AI were to replace us, it would result in a “dead universe” devoid of beings capable of experiencing value. Ensuring that conscious beings remain the primary agents of our future is a vital safeguard.
Beyond X-risk, human enhancement has massive potential for human well-being, such as eradicating genetic diseases. However, for this to be an ethical intervention rather than a dystopian one, the technology must be as open, accessible, and available by default as possible to everyone, regardless of social class or geography, to prevent the emergence of unbridgeable inequalities.
In light of these points, I see HIA as a “secondary strategy.” It could make sense to allocate a portion of funds to this area for the sake of portfolio diversification, a sort of hedge investment against the uncertainty of our long-term future.
[Just noting that an online AI detector says the above comment is most likely written by a human and then “AI polished”; I strongly prefer that you just write the unpolished version even if you think it’s “worse”.]
if we frame them primarily as tools for mitigating AI-related existential risk.
I did frame it that way, because decreasing existential risk should be the top priority in terms of causes. But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
An enhanced human born today will not be an active researcher for at least 20–25 years.
Well, we could say 15-20 years (I think John von Neumann started making significant contributions to math around age 20), but yeah.
If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force.
I’m not sure how you’re using the phrase “shorter timelines” here. If you mean “when AGI actually comes”, then see above. If you mean “someone’s strategic probabilistic distribution over when AGI comes”, then I disagree. See https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html. Even with quite aggressive timelines, HIA acceleration can still decrease X-risk by something in the ballpark of a percentage point (or more).
Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
I spent about a decade researching AGI alignment, much of that time at MIRI; my conclusion, which I believe is agreed upon by a significant portion of the AGI alignment research community, is that this problem is extremely difficult, and not remotely on track to being solved in time, and pouring more resources into the problem basically doesn’t help at the moment. If someone is making strategic decisions based on the fact that there is disagreement on this point, I would urge you to notice that the prominent optimists will not debate the pessimists.
[Just noting that an online AI detector says the above comment is most likely written by a human and then “AI polished”; I strongly prefer that you just write the unpolished version even if you think it’s “worse”.]
Yeah, you’re right. I usually use AI mostly for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and later I half-regretted leaving the text like that, too.
But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
I mostly agree with this. On whether they should be a top cause area, less so. As long as it stays framed as a marginal investment or a “secondary strategy” against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree (both reputationally for EA, and in terms of actual risks from adopting the technology).
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
That’s a good point, but even if HIA demonstrated that we don’t really need AGI, it seems unlikely that society as a whole would give up pursuing it if it could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.
not remotely on track to being solved in time, and pouring more resources into the problem basically doesn’t help at the moment.
I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
Ok, thanks for noting! (It occurred to me after I wrote that that translation would be a major use case and obviously a good one.)
I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree
You’re right, it certainly wouldn’t make sense for it to immediately jump to being a top priority cause, yeah, even if I’m arguing it should maybe be one eventually. If we’re being granular about what I’m actually bidding for, it would be more like “some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment”.
it seems more justifiable and defensible to the general public, institutions, or people who might join EA.
Interesting. Regarding people who might join EA, I don’t think I quite see it, but the point is interesting and I’ll maybe think about it a bit more.
That said, in terms of societal justification, I would want to distinguish between motivations about AGI X-risk, and concrete aims and intentions with reprogenetics. The latter is what I’d propose to collectively work on. That would still involve intelligence amplification, and transparently so, as is owed to society. But the actual plan, and the pitch to society, would be more broad. It would be about the whole of reprogenetics. So it would include empowering parents to give their kids an exceptionally healthy happy life, and so on, and it would include policy, professional, social, and moral safeguards against the major downside risks.
In other words, to borrow from an old CFAR tagline, I’m saying something like “reprogenetics for its own sake, for the sake of X-risk reduction”, if that makes any sense.
In a bunch more detail, I want to distinguish:
(motivation) my background motivation for devoting a lot of effort to HIA and reprogenetics (HIA helping decrease AGI X-risk)
(explanation of motivation) how I describe/explain/justify my background motivation to people / the public / etc.
(concrete aims) the concrete aims/targets that I pursue with my actions within the space of reprogenetics
(explanation of aims) how I describe/explain/justify/commit-to concrete aims
(proposed societal motivation) What I’m putting forward as a vision / motivation for developing and deploying reprogenetics that would be good and would justify doing so
For honesty’s sake, I personally strongly aim to think and communicate so that:
My public explanation of my motivation gives an honest (truthful, open, salient, clear) presentation of my actual motivation.
My public explanation of my concrete aims is likewise honest.
Both my motivations and my concrete aims are clearly presented.
My concrete aims have clear boundaries around them. For example, I might commit to certain actions on the basis of my publicly stated concrete aims.
My concrete aims are consonant with my proposed societal motivation.
This serves multiple purposes. For example:
I want to work out how, and argue to the public, that reprogenetics is good “on its own terms”; in particular, I want to argue that it’s good even if you don’t buy into anything about AGI X-risk. This is a stronger position that I want to argue for, and to expose to critique on that basis.
I want to work out and communicate to the public / stakeholders a vision of how society can orient around reprogenetics that is beneficial to ~everyone. This involves working out societal coordination. The flag of [figuring out what to coordinate on and how] would be more about the concrete aims and the proposed societal motivation, and not about my background motivations.
I would suggest that EA could do something similar. That might work differently / not work at all, in the context of a large social movement. I haven’t thought about that, it’s an interesting question.
it seems unlikely that society as a whole would give up pursuing it if it could get there first.
Yeah, I’m quite uncertain on this point. I’m interested in understanding better the details of why AGI is actually being pursued, and under what conditions various capabilities researchers might walk away from that research. But that’s a whole other intellectual project that I don’t have bandwidth for; I’d strongly encourage someone to pick that one up though!
I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
I do think that the current marginal dollar is much better spent on either supporting a global ban on AGI research, and/or HIA, compared to marginal alignment research. That’s definitely a controversial opinion, but I’ll stand on that (and FWIW, not that I should remotely be taken to speak for them, but for example I would suspect that Yudkowsky and Soares would agree with this judgement). I’m actually unsure whether I personally think the benefit of HIA is more in “some of the kids might solve alignment” vs. “some of the kids might figure out some other way to make the world safe”; I’ve become quite pessimistic about solving AGI alignment, but that’s kinda idiosyncratic.
Thanks for such a detailed answer! Sorry for the slow reply on my part.
“some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment”
Yeah, this makes sense to me.
to borrow from an old CFAR tagline, I’m saying something like “reprogenetics for its own sake, for the sake of X-risk reduction”, if that makes any sense.
The communication strategy you’ve outlined seems right. I’d say society currently doesn’t take AI existential risks all that seriously, so a framing centered on “empowering parents to give their kids an exceptionally healthy happy life” is likely to be much more compelling and effective.
I’ve had a chance to look a little bit closer at the other comments and the links you shared, which I found interesting (though I haven’t gone through everything). A few additional observations though:
I’m not sure I’m in favor of a liberty as broad as what’s proposed in the links. Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more “trivial” traits might backfire. I think a trajectory like that would be more effective in practice.
A vision of genomic emancipation based on freedom of choice and plurality might work in the democratic West, but other states don’t necessarily see those as values, so it seems unlikely they would adopt a similar vision.
Even if democratic states “led the way” by proposing this vision (or another ethical framework for reprogenetics), you would need strong international institutions to establish a common global regulation. That doesn’t seem to be the case in today’s world, which feels like it’s moving toward a breakdown of international rules and a decrease in the global influence of Western democracies.
Dictatorial regimes would likely impose certain characteristics to make themselves more competitive (perhaps also unethical ones). At that point, democracies might be forced to adapt to certain “mandatory” enhancements for their citizens just to stay competitive.
All of this would make the relationship between parents and children even harder. Where before you could only blame chance for your traits, there would now be actual people responsible for many of your characteristics. This is even more true if parents choose not to modify you, leaving you at a disadvantage while everyone else “improved” their children.
Wouldn’t it be worth focusing, in parallel, on technologies that allow for this when someone is already an adult and can choose for themselves? Especially regarding HIA. This would solve several ethical problems, particularly the fact that it wouldn’t be a choice made by someone else. It would also be perceived as less “unnatural,” I think. In a way, people already try to do this with the limited tools we have now. I realize this is mostly a technological problem since such tech is currently “sci-fi,” but that probably won’t be the case forever.
I’m not sure I’m in favor of a liberty as broad as what’s proposed in the links. Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more “trivial” traits might backfire. I think a trajectory like that would be more effective in practice.
For the sake of honesty, and since everyone will be thinking about all those traits anyway, I think we may as well just have the discussion now. People are generally actually pretty open to talking about these things, I think.
It’s not some secret topic. There’s tons of academic papers in mainstream journals discussing all sorts of ethical, moral, social, regulatory, technical, scientific, and practical aspects of various sorts of reprogenetics and advanced ARTs (PGT, embryo editing, gamete selection, IVG, even ectogenesis and cloning). There’s even an academic paper looking at the mathematics of chromosome selection! People run big polls of the public’s opinions about these things; there are national and international committees (scientific, governmental) discussing how to regulate these technologies; there are panel discussions, talks at conferences, statements by advocacy groups, etc. There’s a lot of work to be done in clarifying, improving, and advancing these discussions, but it’s not like some alien taboo topic.
If you meant in terms of the actual rollout, I’m not sure. It’s true that people are more worried about cognitive traits (including intelligence) and appearance stuff than decreasing disease. My current guess is that people are less actually taking a strong reasoned-out stance against increasing intelligence, and rather they are just not sure how to separate out that use from other worse uses, but really I should talk to more people who actually hold various positions like this.
Intuitively I don’t get what’s so bad about affecting appearance, except for the runaway competition thing where everyone wants tall sons. But non-intuitively, I can also see that this would be a vector for “soft eugenics”; e.g. in a racist society parents could be diffusely pressured into making their kid lighter-skinned (cf. “face bleaching”). Part of my thinking here is that genomic liberty works in the context of multi-generational feedback. In that context, it seems better to err on the side of more liberty rather than less, because we can regulate later when we see that things are going wrong, but deregulating is hard because you aren’t getting feedback about how the de-regulated version would go. (Cf. https://berkeleygenomics.org/articles/Genomic_emancipation.html#habermas-and-multigenerational-feedback )
A vision of genomic emancipation based on freedom of choice and plurality might work in the democratic West, but other states don’t necessarily see those as values, so it seems unlikely they would adopt a similar vision.
This might be right. I’m really unsure what would happen. I’m also not sure if this should be a crux.
I do, though, think it’s much better for reprogenetics to be developed in a strongly liberal democracy first, so that a good version of a society with reprogenetics can be worked out. Say what you will about it, but AFAIK the US is the most successfully diverse / pluralistic state in history, maybe by far, in terms of global languages, cultures, ethnicities, religious beliefs and practices, political views, etc. (Some empires are contenders, maybe; but that’s by conquering many nations and then in some cases being nice. India is highly diverse, but I think it’s not globally diverse in the same way.) I think an awesome liberal pluralistic version of reprogenetics is going to be hard to beat. (“Eugenics with Chinese characteristics”, as it were.)
I’m not sure they would do much, because AFAIK they already aren’t doing much. They already could do coercive person-wise eugenics, and AFAIK they aren’t? I guess in some cases, actual genocides could be motivated by eugenical reasoning? Of course, the Nazis were. If they wanted to do somewhat less coercive but still coercive eugenics, they could force IVF and preimplantation genetic testing on their subjects, but they aren’t AFAIK. Presumably the incentive (real or perceived) would increase as the effectiveness of reprogenetics increases, though, so this pattern could change. I would imagine that it’s ~inherently difficult to regulate reproduction, however. Like, what are you going to do? Stop people from screwing? You can do it, but you have to get really violent on a mass scale. (I hope this isn’t taken as a dismissal; I mean this as my first reaction in a conversation, to elicit a more specific plausible scenario. I’ve talked to at least one person living in an oppressive regime who was worried about the regime doing population control—specifically, controlling genetics of personality.)
Regarding whether this should be a crux, I’m also unsure. In general, I’m not trying to be straightforwardly (/naively/myopically) consequentialist. In other words, I wouldn’t simply count up the nations that would do a big bad thing with tech, and the ones that would do a big good thing, and then see which amounts to more. For one thing, it feels weird to think that I’m going to not use some technology to help my own child, just because you might use that technology to harm yours. I would also want to think about the longer term; the liberal pluralistic version could help usher in a great future (as part of broader progress), and I want to hasten that—I don’t think we want to progress at the rate of the least moral country, or something. IDK.
All that said, I do think we should work on international regulatory regimes for reprogenetics. I think there are probably some core aspects of genomic liberty that could be reasonably instituted at the international level, that might significantly alleviate these risks. For example “No regime should ever coerce any of its subjects to have children” or “No regime should ever coerce any of its subjects to have certain personality traits”. These might be hard to formalize / operationalize. Would take more work.
Another avenue is professional and scientific norms within those communities. These technologies take a lot of technical and scientific know-how. As an example, different ancestry groups—at least at the moment—need to collect genome data and construct new PGSes in order to use polygenic reprogenetics. (This isn’t a good thing because it can lead to unequal access, and hopefully it can be attenuated by better genetics models.) My point is just that this is an example where a country can’t just snap its fingers and implement this stuff without some buy-in from scientists etc. Another example is that IVF is not trivial to do; you need ultrasound, medication expertise, anesthesiologists, and a surgeon. Another example: IVG would likely take quite a while to scale up and innovate so strongly that it’s a routine thing (I’m just guessing, here; are there cases where complex stem cell differentiation is done routinely in many many labs?).
There are also probably at least a few cases where the scientific community could avoid certain advances, or keep them private, at least partly / for some time. For example, I’d oppose doing any work to refine an “obedience PGS”, though it gets awkward because various things that you do want to have PGSes for could be correlated a bit with obedience. FWIW, personality seems significantly harder to model, at least for now.
All of this would make the relationship between parents and children even harder. Where before you could only blame chance for your traits, there would now be actual people responsible for many of your characteristics. This is even more true if parents choose not to modify you, leaving you at a disadvantage while everyone else “improved” their children.
I think that’s probably true in aggregate, but as someone who didn’t get reprogenetics but would like to give it to my future children, that’s a cost I’d be willing to pay. I hear that simply creating the option maybe automatically means everyone pays the cost. But I think this would prove too much? Like, it applies just as much to any new thing you create, which parents could in theory give to their kids, but might not want to.
Wouldn’t it be worth focusing, in parallel, on technologies that allow for this when someone is already an adult and can choose for themselves? Especially regarding HIA. This would solve several ethical problems, particularly the fact that it wouldn’t be a choice made by someone else. It would also be perceived as less “unnatural,” I think. In a way, people already try to do this with the limited tools we have now. I realize this is mostly a technological problem since such tech is currently “sci-fi,” but that probably won’t be the case forever.
Absolutely! I think there are several kinda-sorta-plausible paths to this. But, they’re all pretty speculative and also hard to accelerate, and in some cases potentially quite dangerous. See https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods. Since that post, I’ve done bits of research about these on the side, but haven’t found any big updates that make it seem more feasible. One throughline is that reprogenetics is the only case where you can actually get longitudinal, end-to-end empirical data about the effects of potential interventions on intelligence and other interesting traits. You can observe actual people with different behaviors and different genes. But what are you going to do with your new brain drug that wipes out all the PNNs in someone’s association cortex? Just try it and hope that you don’t completely scramble their mind? Or try it on a chimpanzee, and hope that better termite-fishing or digit recall in chimps would translate to conceptually creative problem solving ability in humans? It could work, but IDK. That said, there could totally be several plausible ways, and I’m interested in researching those. You do also get the advantage of slightly faster iteration cycles.
Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more “trivial” traits might backfire. I think a trajectory like that would be more effective in practice
If you meant in terms of the actual rollout,
Yeah, I meant in terms of practical adoption. A democratic state will initially face strong pressure to restrict or ban technologies that the majority of the population strongly disagrees with. Even though this topic is already debated, this debate probably still feels pretty ‘alien’ to ordinary people. I don’t think a large portion of the public could easily accept it, especially in its broad ‘total liberty’ version.
Human reproduction is seen as something sacred. To intervene in a way that feels justifiable to common people, you’d need a justification that’s just as ‘sacred’ or important. Fighting diseases definitely fits that for most reasonable people. Even increasing intelligence or creativity could be seen as obviously useful, even if not sacred. But claiming the right to choose the fine details of your child’s personality would look like the classic ‘playing God’ scenario, which could turn a lot of people against the whole thing. Even worse, allowing total liberty over ‘trivial’ traits (though I agree they aren’t often actually so trivial) would act as a perfect strawman for anyone wanting to attack this. It gives the idea of children as ‘consumer products’ you pick at a supermarket based on trends, like choosing a dog breed because it’s fashionable. These associations would be horrific for many people and maybe would overshadow the actual concrete benefits of these technologies.
I think we tend to underestimate how much people would resist change when it comes to deeply rooted traditions, and probably even more for basic biological functions like natural reproduction. We can just look at the rejection of GMOs: they are mostly proven to be safe, yet they are still banned or hated in many places.
My point is that by strongly advocating for everything at once, we may risk an ‘all-or-nothing’ rejection. Giving people time to get used to the technology and seeing that nothing ‘demonic’ happens seems like a more plausible way to gain long-term acceptance. Not that discussing everything now is unreasonable, but we should be aware that it might be a hard thing to pull off. And therefore try to focus on at least saving the less controversial interventions (such as preventing disease and improving intelligence).
That said, the fact that this could potentially be a big new business might be a strong incentive, especially in a country like the US. So maybe I’m being too pessimistic here.
I agree with the rest of your observations. I don’t think the critical points I raised are, in themselves, sufficient reasons not to adopt the technology, but it’s obviously important to have them clear from the start and try to prevent them as much as possible.
However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
I don’t think this is a real contribution. I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades. I think they’re trying to make it because they think they can.
And also because they [rightly or wrongly] believe that AGI will be more cost effective, more controllable, need less sleep and have higher problem solving potential than even the smartest possible humans. And be here a lot sooner. (And in some of the AGI fantasies, a route to making humans genetically smarter anyway!)
Even if one assumes near term “AGI” has a fairly low ceiling,[1] it seems like “intelligence augmentation” is unpromising as an EA intervention.[2] The necessary research is complex, expensive, long term and dependent not just on germline engineering, but on academic research to understand what intelligence is in less shallow terms than we currently do. It’s not clear that there are individual tractable interventions. The quantifiable impact—if it actually worked—would presumably be a tiny proportion of people sufficiently rich and focused on maximising their offspring’s intelligence paying to select a few genes somewhat correlated with intelligence for “designer babies”, with the possibility this might translate enough into real world outcomes to turn a handful of children with already above average prospects into particularly capable and influential individuals. It is not obvious these children will grow up to use their greater talent (real or perceived) for mitigating existential risk or any other sort of greater good.[3] Humans with rich, driven parents who’ve been taught about their superiority to ordinary humans from birth don’t sound immune to “alignment problems” either…
As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades.
I don’t feel confident about this in any direction. However, my sense is that it’s one of the top positive justifications that people use for making AGI (I mean, justifications that would apply in the absence of race dynamics). Not specifically “there won’t be enough smart people”—but rather, “humanity doesn’t currently have the brainpower to solve the really pressing problems”, e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say “well it doesn’t matter because someone else will do this research anyway, why not me”. But what about a strong global ban? Then you get objections like “well hold on a minute, maybe this AI stuff is pretty good, it could cure cancer and so on”. That’s the justification that I’m trying to push against by saying “look, we can get all that good stuff on a pretty good timeline without crazy x-risk”.
Regarding your next paragraph, there’s a lot of claims there, which I largely think are incorrect, but it’s kinda hard to respond to them in a way that is both satisfyingly detailed+convincing but also short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore
If you’re interested in discussing this at more length, I’d love to have you on for a podcast episode. Interested?
the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
Yeah this is another quite large potential benefit of reprogenetics that I’m excited about. It would require that the technology ends up “safe, accessible, and powerful”.
I partially agree about “what intelligence is”, in that this is a quite important area for further research. However, I do not agree that we would need to know more, in order to enable parents to make quite [beneficial by their lights] genomic choices on behalf of their future children, including decreasing disease risk and also increasing actual intelligence.
I agree that at the very beginning some weird rich people would be the ones benefiting. But I’m confident that the technology would become affordable for many—quite plausibly significantly more affordable than IVF currently is (e.g. given IVG). I then suspect many parents would want to give their kid a genomic foundation for high capabilities in general, including intelligence. How much, is of course up to them; I suspect, though, that there would be plenty of people interested in having very smart kids.
As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
Here are some quick thoughts that come to mind after reading your post. I find HIA and reprogenetics to be fascinating topics, but I see several critical hurdles if we frame them primarily as tools for mitigating AI-related existential risk.
The biggest logical hurdle is time. AI development is moving at a breakneck pace, while biological HIA interventions (such as embryo selection) take decades to manifest in the real world. An enhanced human born today will not be an active researcher for at least 20–25 years. If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
I notice you address this objection by arguing that even a 10-year acceleration in a 40–50 year horizon still represents a meaningful reduction in existential risk. I find this partially compelling — but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force. Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
We could also consider a complementary path: the top priority remains creating a safe, aligned AI. Once achieved, we can use that superintelligence to help us develop HIA and advanced biotechnology far more rapidly and safely than we ever could on our own.
Furthermore, just as we fear unaligned AI, we should fear “unaligned” superintelligent humans. This risk may be even greater, as humans are not “programmed” for pure rationality; we are driven by complex emotions, tribalism, and deep-seated cognitive biases. Therefore, any HIA research should prioritize and fund moral enhancement (e.g., increasing empathy and compassion, reducing cognitive biases) alongside cognitive gains. This is crucial to avoid creating highly intelligent but destructive actors.
If we imagine a future philanthropic program to make these enhancements accessible for free, one could hypothesize a form of “bundling”: making the cognitive upgrade conditional on a voluntary moral/character upgrade. While not a state mandate — and admittedly open to hard questions about who defines “moral improvement” and the risk of paternalism — it would act as a soft requirement for those choosing to use subsidized resources, thereby incentivizing positive social evolution.
A clear advantage of HIA over pure AI development is the guarantee of consciousness. If a non-conscious, unaligned AI were to replace us, it would result in a “dead universe” devoid of beings capable of experiencing value. Ensuring that conscious beings remain the primary agents of our future is a vital safeguard.
Beyond X-risk, human enhancement has massive potential for human well-being, such as eradicating genetic diseases. However, for this to be an ethical intervention rather than a dystopian one, the technology must be as open, accessible, and available by default as possible to everyone, regardless of social class or geography, to prevent the emergence of unbridgeable inequalities.
In light of these points, I see HIA as a “secondary strategy.” It could make sense to allocate a portion of funds to this area for the sake of portfolio diversification, a sort of hedge investment against the uncertainty of our long-term future.
[Just noting that an online AI detector says the above comment is most likely written by a human and then “AI polished”; I strongly prefer that you just write the unpolished version even if you think it’s “worse”.]
I did frame it that way, because decreasing existential risk should be the top priority in terms of causes. But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
Well, we could say 15-20 years (I think John von Neumann started making significant contributions to math around age 20), but yeah.
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
I’m not sure how you’re using the phrase “shorter timelines” here. If you mean “when AGI actually comes”, then see above. If you mean “someone’s strategic probabilistic distribution over when AGI comes”, then I disagree. See https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html. Even with quite aggressive timelines, HIA acceleration can still decrease X-risk by at least in the ballpark of a percentage point (or more).
I spent about a decade researching AGI alignment, much of that time at MIRI; my conclusion, which I believe is agreed upon by a significant portion of the AGI alignment research community, is that this problem is extremely difficult, and not remotely on track to being solved in time, and pouring more resources into the problem basically doesn’t help at the moment. If someone is making strategic decisions based on the fact that there is disagreement on this point, I would urge you to notice that the prominent optimists will not debate the pessimists.
Yeah, you’re right. I usually use AI mostly for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and later I half-regretted leaving the text like that, too.
I mostly agree with this. On whether they should be a top cause area, less so. As long as it stays framed as a marginal investment or a “secondary strategy” against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree (both reputationally for EA, and in terms of actual risks from adopting the technology).
That’s a good point, but even if HIA demonstrated that we don’t really need AGI, it seems unlikely that society as a whole would give up pursuing it if it could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.
I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
Ok, thanks for noting! (It occurred to me after I wrote that that translation would be a major use case and obviously a good one.)
You’re right, it certainly wouldn’t make sense for it to immediately jump to being a top priority cause, yeah, even if I’m arguing it should maybe be one eventually. If we’re being granular about the computations I’m bidding for, it would be more like “some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment”.
Interesting. Regarding people who might join EA, I don’t think I quite see it, but the point is interesting and I’ll maybe think about it a bit more.
That said, in terms of societal justification, I would want to distinguish between motivations about AGI X-risk, and concrete aims and intentions with reprogenetics. The latter is what I’d propose to collectively work on. That would still involve intelligence amplification, and transparently so, as is owed to society. But the actual plan, and the pitch to society, would be more broad. It would be about the whole of reprogenetics. So it would include empowering parents to give their kids an exceptionally healthy happy life, and so on, and it would include policy, professional, social, and moral safeguards against the major downside risks.
In other words, to borrow from an old CFAR tagline, I’m saying something like “reprogenetics for its own sake, for the sake of X-risk reduction”, if that makes any sense.
In a bunch more detail, I want to distinguish:
(motivation) my background motivation for devoting a lot of effort to HIA and reprogenetics (HIA helping decrease AGI X-risk)
(explanation of motivation) how I describe/explain/justify my background motivation to people / the public / etc.
(concrete aims) the concrete aims/targets that I pursue with my actions within the space of reprogenetics
(explanation of aims) how I describe/explain/justify/commit-to concrete aims
(proposed societal motivation) What I’m putting forward as a vision / motivation for developing and deploying reprogenetics that would be good and would justify doing so
For honesty’s sake, I personally strongly aim to think and communicate so that:
My public explanation of my motivation gives an honest (truthful, open, salient, clear) presentation of my actual motivation.
My public explanation of my concrete aims is likewise honest.
Both my motivations and my concrete aims are clearly presented.
My concrete aims have clear boundaries around them. For example, I might commit to certain actions on the basis of my publicly stated concrete aims.
My concrete aims are consonant with my proposed societal motivation.
This serves multiple purposes. For example:
I want to work out how, and argue to the public, that reprogenetics is good “on its own terms”; in particular, I want to argue that it’s good even if you don’t buy into anything about AGI X-risk. This is a stronger position I want to argue for, and expose my position to critique on the basis of.
I want to work out and communicate to the public / stakeholders a vision of how society can orient around reprogenetics that is beneficial to ~everyone. This involves working out societal coordination. The flag of [figuring out what to coordinate on and how] would be more about the concrete aims and the proposed societal motivation, and not about my background motivations.
I would suggest that EA could do something similar. That might work differently / not work at all, in the context of a large social movement. I haven’t thought about that, it’s an interesting question.
Yeah, I’m quite uncertain on this point. I’m interested in understanding better the details of why AGI is actually being pursued, and under what conditions various capabilities researchers might walk away from that research. But that’s a whole other intellectual project that I don’t have bandwidth for; I’d strongly encourage someone to pick that one up though!
I do think that the current marginal dollar is much better spent on either supporting a global ban on AGI research, and/or HIA, compared to marginal alignment research. That’s definitely a controversial opinion, but I’ll stand on that (and FWIW, not that I should remotely be taken to speak for them, but for example I would suspect that Yudkowsky and Soares would agree with this judgement). I’m actually unsure whether I personally think the benefit of HIA is more in “some of the kids might solve alignment” vs. “some of the kids might figure out some other way to make the world safe”; I’ve become quite pessimistic about solving AGI alignment, but that’s kinda idiosyncratic.
Thanks for such a detailed answer! Sorry for the slow reply on my part.
Yeah, this makes sense to me.
The communication strategy you’ve outlined seems right. I’d say society currently doesn’t take AI existential risks all that seriously, so a framing centered on “empowering parents to give their kids an exceptionally healthy happy life” is likely to be much more compelling and effective.
I’ve had a chance to look a little bit closer at the other comments and the links you shared, which I found interesting (though I haven’t gone through everything). A few additional observations though:
I’m not sure I’m in favor of a liberty as broad as what’s proposed in the links. Personally, I’d guess that for this to be acceptable (and adopted by institutions), we should initially propose the technology for less controversial goals, like removing diseases or promoting health. Increasing intelligence might also be a potentially non-controversial goal. But proposing to act immediately on personality and more “trivial” traits might backfire. I think a trajectory like that would be more effective in practice.
A vision of genomic emancipation based on freedom of choice and plurality might work in the democratic West, but other states don’t necessarily see those as values, so it seems unlikely they would adopt a similar vision.
Even if democratic states “led the way” by proposing this vision (or another ethical framework for reprogenetics), you would need strong international institutions to establish a common global regulation. That doesn’t seem to be the case in today’s world, which feels like it’s moving toward a breakdown of international rules and a decrease in the global influence of Western democracies.
Dictatorial regimes would likely impose certain characteristics to make themselves more competitive (perhaps also unethical ones). At that point, democracies might be forced to adapt to certain “mandatory” enhancements for their citizens just to stay competitive.
All of this would make the relationship between parents and children even harder. Where before you could only blame chance for your traits, there would now be actual people responsible for many of your characteristics. This is even more true if parents choose not to modify you, leaving you at a disadvantage while everyone else “improved” their children.
Wouldn’t it be worth focusing, in parallel, on technologies that allow for this when someone is already an adult and can choose for themselves? Especially regarding HIA. This would solve several ethical problems, particularly the fact that it wouldn’t be a choice made by someone else. It would also be perceived as less “unnatural,” I think. In a way, people already try to do this with the limited tools we have now. I realize this is mostly a technological problem since such tech is currently “sci-fi,” but that probably won’t be the case forever.
For the sake of honesty, and since everyone will be thinking about all those traits anyway, I think we may as well just have the discussion now. People are generally actually pretty open to talking about these things, I think.
It’s not some secret topic. There’s tons of academic papers in mainstream journals discussing all sorts of ethical, moral, social, regulatory, technical, scientific, and practical aspects of various sorts of reprogenetics and advanced ARTs (PGT, embryo editing, gamete selection, IVG, even ectogenesis and cloning). There’s even an academic paper looking at the mathematics of chromosome selection! People run big polls of the public’s opinions about these things; there are national and international committees (scientific, governmental) discussing how to regulate these technologies; there are panel discussions, talks at conferences, statements by advocacy groups, etc. There’s a lot of work to be done in clarifying, improving, and advancing these discussions, but it’s not like some alien taboo topic.
If you meant in terms of the actual rollout, I’m not sure. It’s true that people are more worried about cognitive traits (including intelligence) and appearance stuff than decreasing disease. My current guess is that people are less actually taking a strong reasoned-out stance against increasing intelligence, and rather they are just not sure how to separate out that use from other worse uses, but really I should talk to more people who actually hold various positions like this.
Intuitively I don’t get what’s so bad about affecting appearance, except for the runaway competition thing where everyone wants tall sons. But non-intuitively, I can also see that this would be a vector for “soft eugenics”; e.g. in a racist society parents could be diffusely pressured into making their kid lighter-skinned (cf. “face bleaching”). Part of my thinking here, is that genomic liberty works in the context of multi-generational feedback. In that context, it seems better to err on the side of more liberty rather than less, because we can regulate later when we see that things are going wrong, but deregulating is hard because you aren’t getting feedback about how the de-regulated version would go. (Cf. https://berkeleygenomics.org/articles/Genomic_emancipation.html#habermas-and-multigenerational-feedback )
This might be right. I’m really unsure what would happen. I’m also not sure if this should be a crux.
I do, though, think it’s much better for reprogenetics to be developed in a strongly liberal democracy first, so that a good version of a society with reprogenetics can be worked out. Say what you will about it, but AFAIK the US is the most successfully diverse / pluralistic state in history, maybe by far, in terms of global languages, cultures, ethnicities, religious beliefs and practices, political views, etc. (Some empires are contenders, maybe; but that’s by conquering many nations and then in some cases being nice. India is highly diverse, but I think it’s not globally diverse in the same way.) I think an awesome liberal pluralistic version of reprogenetics is going to be hard to beat. (“Eugenics with Chinese characteristics”, as it were.)
I’m not sure they would do much, because AFAIK they already aren’t doing much. They already could do coercive person-wise eugenics, and AFAIK they aren’t? I guess in some cases, actual genocides could be motivated by eugenical reasoning? Of course, the Nazis were. If they wanted to do somewhat less coercive but still coercive eugenics, they could force IVF and preimplantation genetic testing on their subjects, but they aren’t AFAIK. Presumably the incentive (real or perceived) would increase as the effectiveness of reprogenetics increases, though, so this pattern could change. I would imagine that it’s ~inherently difficult to regulate reproduction, however. Like, what are you going to do? Stop people from screwing? You can do it, but you have to get really violent on a mass scale. (I hope this isn’t taken as a dismissal; I mean this as my first reaction in a conversation, to elicit a more specific plausible scenario. I’ve talked to at least one person living in an oppressive regime who was worried about the regime doing population control—specifically, controlling genetics of personality.)
Regarding whether this should be a crux, I’m also unsure. In general, I’m not trying to be straightforwardly (/naively/myopically) consequentialist. In other words, I wouldn’t simply count up the nations that would do a big bad thing with tech, and the ones that would do a big good thing, and then see which amounts to more. For one thing, it feels weird to think that I’m going to not use some technology to help my own child, just because you might use that technology to harm yours. I would also want to think about the longer term; the liberal pluralistic version could help usher in a great future (as part of broader progress), and I want to hasten that—I don’t think we want to progress at the rate of the least moral country, or something. IDK.
All that said, I do think we should work on international regulatory regimes for reprogenetics. I think there are probably some core aspects of genomic liberty that could be reasonably instituted at the international level, that might significantly alleviate these risks. For example “No regime should ever coerce any of its subjects to have children” or “No regime should ever coerce any of its subjects to have certain personality traits”. These might be hard to formalize / operationalize. Would take more work.
Another avenue is professional and scientific norms within those communities. These technologies take a lot of technical and scientific know-how. As an example, different ancestry groups—at least at the moment—need to collect genome data and construct new PGSes in order to use polygenic reprogenetics. (This isn’t a good thing because it can lead to unequal access, and hopefully it can be attenuated by better genetics models.) My point is just that this is an example where a country can’t just snap its fingers and implement this stuff without some buy-in from scientists etc. Another example is that IVF is not trivial to do; you need ultrasound, medication expertise, anesthesiologists, and a surgeon. Another example: IVG would likely take quite a while to scale up and innovate so strongly that it’s a routine thing (I’m just guessing, here; are there cases where complex stem cell differentiation is done routinely in many many labs?).
There are also probably at least a few cases where the scientific community could avoid certain advances, or keep them private, at least partly / for some time. For example, I’d oppose doing any work to refine an “obedience PGS”, though it gets awkward because various things that you do want to have PGSes for could be correlated a bit with obedience. FWIW, personality seems significantly harder to model, at least for now.
I think that’s probably true in aggregate, but as someone who didn’t get reprogenetics but would like to give it to my future children, that’s a cost I’d be willing to pay. I hear that simply creating the option maybe automatically means everyone pays the cost. But I think this would prove too much? Like, it applies just as much to any new thing you create, which parents could in theory give to their kids, but might not want to.
Absolutely! I think there are several kinda-sorta-plausible paths to this. But, they’re all pretty speculative and also hard to accelerate, and in some cases potentially quite dangerous. See https://www.lesswrong.com/posts/jTiSWHKAtnyA723LE/overview-of-strong-human-intelligence-amplification-methods. Since that post, I’ve done bits of research about these on the side, but haven’t found any big updates that make it seem more feasible. One throughline is that reprogenetics is the only case where you can actually get longitudinal, end-to-end empirical data about the effects of potential interventions on intelligence and other interesting traits. You can observe actual people with different behaviors and different genes. But what are you going to do with your new brain drug that wipes out all the PNNs in someone’s association cortex? Just try it and hope that you don’t completely scramble their mind? Or try it on a chimpanzee, and hope that better termite-fishing or digit recall in chimps would translate to conceptually creative problem solving ability in humans? It could work, but IDK. That said, there could totally be several plausible ways, and I’m interested in researching those. You do also get the advantage of slightly faster iteration cycles.
Yeah, I meant in terms of practical adoption. A democratic state will initially face strong pressure to restrict or ban technologies that the majority of the population strongly disagrees with. Even though this topic is already debated, this debate probably still feels pretty ‘alien’ to ordinary people. I don’t think a large portion of the public could easily accept it, especially in its broad ‘total liberty’ version.
Human reproduction is seen as something sacred. To intervene in a way that feels justifiable to common people, you’d need a justification that’s just as ‘sacred’ or important. Fighting diseases definitely fits that for most reasonable people. Even increasing intelligence or creativity could be seen as obviously useful, even if not sacred. But claiming the right to choose the fine details of your child’s personality would look like the classic ‘playing God’ scenario, which could turn a lot of people against the whole thing. Even worse, allowing total liberty over ‘trivial’ traits (though I agree they aren’t often actually so trivial) would act as a perfect strawman for anyone wanting to attack this. It gives the idea of children as ‘consumer products’ you pick at a supermarket based on trends, like choosing a dog breed because it’s fashionable. These associations would be horrific for many people and maybe would overshadow the actual concrete benefits of these technologies.
I think we tend to underestimate how much people would resist change when it comes to deeply rooted traditions, and probably even more for basic biological functions like natural reproduction. We can just look at the rejection of GMOs: they are mostly proven to be safe, yet they are still banned or hated in many places.
My point is that by strongly advocating for everything at once, we may risk an ‘all-or-nothing’ rejection. Giving people time to get used to the technology and seeing that nothing ‘demonic’ happens seems like a more plausible way to gain long-term acceptance. Not that discussing everything now is unreasonable, but we should be aware that it might be a hard thing to pull off. And therefore try to focus on at least saving the less controversial interventions (such as preventing disease and improving intelligence).
That said, the fact that this could potentially be a big new business might be a strong incentive, especially in a country like the US. So maybe I’m being too pessimistic here.
I agree with the rest of your observations. I don’t think the critical points I raised are, in themselves, sufficient reasons not to adopt the technology, but it’s obviously important to have them clear from the start and try to prevent them as much as possible.
Sorry (again) for this very late reply!
You may be right, IDK. Will have to think more.
I don’t think this is a real contribution. I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades. I think they’re trying to make it because they think they can.
And also because they [rightly or wrongly] believe that AGI will be more cost effective, more controllable, need less sleep and have higher problem solving potential than even the smartest possible humans. And be here a lot sooner. (And in some of the AGI fantasies, a route to making humans genetically smarter anyway!)
Even if one assumes near term “AGI” has a fairly low ceiling,[1] it seems like “intelligence augmentation” is unpromising as an EA intervention.[2] The necessary research is complex, expensive, long term and dependent not just on germline engineering, but on academic research to understand what intelligence is in less shallow terms than we currently do. It’s not clear that there are individual tractable interventions. The quantifiable impact—if it actually worked—would presumably be a tiny proportion of people sufficiently rich and focused on maximising their offspring’s intelligence paying to select a few genes somewhat correlated with intelligence for “designer babies”, with the possibility this might translate enough into real world outcomes to turn a handful of children with already above average prospects into particularly capable and influential individuals. It is not obvious these children will grow up to use their greater talent (real or perceived) for mitigating existential risk or any other sort of greater good.[3] Humans with rich, driven parents who’ve been taught about their superiority to ordinary humans from birth don’t sound immune to “alignment problems” either...
As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
I do actually, but it’s not fashionable here, or indeed at MIRI!
at least, viewed through EA’s analytical lens rather than its associated cultural tendency to overestimate the importance of individual intelligence...
I mean, what percentage of the world’s smartest people focuses on that now?
Thanks for engaging substantively!
I don’t feel confident about this in any direction. However, my sense is that it’s one of the top positive justifications that people use for making AGI (I mean, justifications that would apply in the absence of race dynamics). Not specifically “there won’t be enough smart people”—but rather, “humanity doesn’t currently have the brainpower to solve the really pressing problems”, e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say “well it doesn’t matter because someone else will do this research anyway, why not me”. But what about a strong global ban? Then you get objections like “well hold on a minute, maybe this AI stuff is pretty good, it could cure cancer and so on”. That’s the justification that I’m trying to push against by saying “look, we can get all that good stuff on a pretty good timeline without crazy x-risk”.
Regarding your next paragraph: there are a lot of claims there, most of which I think are incorrect, but it's hard to respond to them in a way that is both satisfyingly detailed and convincing and also short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore
If you’re interested in discussing this at more length, I’d love to have you on for a podcast episode. Interested?
Yeah this is another quite large potential benefit of reprogenetics that I’m excited about. It would require that the technology ends up “safe, accessible, and powerful”.
I guess, just to state where some of the disagreements lie:
I agree the research is complex and multifaceted. (See for example https://berkeleygenomics.org/articles/Visual_roadmap_to_strong_human_germline_engineering.html and https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html )
I partially agree about "what intelligence is", in that this is a quite important area for further research. However, I do not agree that we would need to know more in order to enable parents to make genomic choices on behalf of their future children that are quite beneficial by their lights, including decreasing disease risk and also increasing actual intelligence.
I agree that at the very beginning some weird rich people would be the ones benefiting. But I'm confident that the technology would become affordable for many, quite plausibly significantly more affordable than IVF currently is (e.g. given IVG). I then suspect many parents would want to give their kid a genomic foundation for high capabilities in general, including intelligence. How much is, of course, up to them; I suspect, though, that there would be plenty of people interested in having very smart kids.
Regarding "select a few genes": I'm interested in significantly stronger reprogenetics. We already know many hundreds of genes that contribute to intelligence, and stronger reprogenetics is, biotechnologically speaking, probably feasible; see https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html
Regarding what the kids will do, yeah, they can and should do what they want, but do you think that this is net bad? Or what would be your guess here? Cf. https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html and https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts
Regarding this, see also my comment here: https://forum.effectivealtruism.org/posts/QLugEBJJ3HYyAcvwy/new-cause-area-human-intelligence-amplification?commentId=5yxEpv9vFRABptHyd