Here are some quick thoughts that come to mind after reading your post. I find HIA and reprogenetics to be fascinating topics, but I see several critical hurdles if we frame them primarily as tools for mitigating AI-related existential risk.
The biggest logical hurdle is time. AI development is moving at a breakneck pace, while biological HIA interventions (such as embryo selection) take decades to manifest in the real world. An enhanced human born today will not be an active researcher for at least 20–25 years. If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
I notice you address this objection by arguing that even a 10-year acceleration in a 40–50 year horizon still represents a meaningful reduction in existential risk. I find this partially compelling — but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force. Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
We could also consider a complementary path: the top priority remains creating a safe, aligned AI. Once achieved, we can use that superintelligence to help us develop HIA and advanced biotechnology far more rapidly and safely than we ever could on our own.
Furthermore, just as we fear unaligned AI, we should fear “unaligned” superintelligent humans. This risk may be even greater, as humans are not “programmed” for pure rationality; we are driven by complex emotions, tribalism, and deep-seated cognitive biases. Therefore, any HIA research should prioritize and fund moral enhancement (e.g., increasing empathy and compassion, reducing cognitive biases) alongside cognitive gains. This is crucial to avoid creating highly intelligent but destructive actors.
If we imagine a future philanthropic program to make these enhancements accessible for free, one could hypothesize a form of “bundling”: making the cognitive upgrade conditional on a voluntary moral/character upgrade. While not a state mandate — and admittedly open to hard questions about who defines “moral improvement” and the risk of paternalism — it would act as a soft requirement for those choosing to use subsidized resources, thereby incentivizing positive social evolution.
A clear advantage of HIA over pure AI development is the guarantee of consciousness. If a non-conscious, unaligned AI were to replace us, it would result in a “dead universe” devoid of beings capable of experiencing value. Ensuring that conscious beings remain the primary agents of our future is a vital safeguard.
Beyond X-risk, human enhancement has massive potential for human well-being, such as eradicating genetic diseases. However, for this to be an ethical intervention rather than a dystopian one, the technology must be as open, accessible, and available by default as possible to everyone, regardless of social class or geography, to prevent the emergence of unbridgeable inequalities.
In light of these points, I see HIA as a “secondary strategy.” It could make sense to allocate a portion of funds to this area for the sake of portfolio diversification, a sort of hedge investment against the uncertainty of our long-term future.
[Just noting that an online AI detector says the above comment is most likely written by a human and then “AI polished”; I strongly prefer that you just write the unpolished version even if you think it’s “worse”.]
> if we frame them primarily as tools for mitigating AI-related existential risk.
I did frame it that way, because decreasing existential risk should be the top priority in terms of causes. But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
> An enhanced human born today will not be an active researcher for at least 20–25 years.
Well, we could say 15–20 years (I think John von Neumann started making significant contributions to math around age 20), but yeah.
> If AGI arrives within a 15-year window, human intelligence will simply lag behind at the most critical juncture.
This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
> but it seems to assume that AGI timelines are long enough for HIA to matter at all, which remains deeply uncertain. On shorter timelines, the argument loses most of its force.
I’m not sure how you’re using the phrase “shorter timelines” here. If you mean “when AGI actually comes”, then see above. If you mean “someone’s strategic probabilistic distribution over when AGI comes”, then I disagree. See https://tsvibt.blogspot.com/2022/08/the-benefit-of-intervening-sooner.html. Even with quite aggressive timelines, HIA acceleration can still decrease X-risk by something in the ballpark of a percentage point (or more).
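To illustrate the shape of that argument, here is a minimal toy model (a sketch supplied purely for illustration; none of these numbers come from the linked post, and every one of them is an assumption): HIA only helps in the worlds where AGI arrives after enhanced researchers have matured, so the value of acceleration is the probability mass captured in between.

```python
# Toy model of X-risk reduction from accelerating HIA. Every number is
# an illustrative assumption, not an estimate from the linked post.

P_DOOM_BASE = 0.50      # assumed P(doom) when AGI precedes mature HIA
P_DOOM_WITH_HIA = 0.40  # assumed P(doom) when HIA matures before AGI

# Assumed survival function for AGI arrival, deliberately aggressive:
# P(AGI arrives at least y years from now).
P_AGI_AT_LEAST = {10: 0.50, 15: 0.35, 20: 0.25, 25: 0.15}

def risk_reduction(maturation_years: int) -> float:
    """Expected X-risk reduction vs. a world with no HIA program:
    HIA only matters in the worlds where it matures before AGI."""
    p_in_time = P_AGI_AT_LEAST[maturation_years]
    return p_in_time * (P_DOOM_BASE - P_DOOM_WITH_HIA)

slow = risk_reduction(25)  # HIA-grown researchers arrive in ~25 years
fast = risk_reduction(20)  # acceleration pulls that in to ~20 years
print(f"gain from 5y acceleration: {fast - slow:.3f}")  # 0.010, ~1 percentage point
```

Even with most of the probability mass on short timelines, the tail past 20–25 years multiplied by a modest per-world improvement comes out on the order of a percentage point.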
> Addressing AI X-risk by trying to create smarter humans who might then solve the problem is also a highly indirect strategy; it seems more tractable to focus directly on AI alignment.
I spent about a decade researching AGI alignment, much of that time at MIRI; my conclusion, which I believe is agreed upon by a significant portion of the AGI alignment research community, is that this problem is extremely difficult, and not remotely on track to being solved in time, and pouring more resources into the problem basically doesn’t help at the moment. If someone is making strategic decisions based on the fact that there is disagreement on this point, I would urge you to notice that the prominent optimists will not debate the pessimists.
> [Just noting that an online AI detector says the above comment is most likely written by a human and then “AI polished”; I strongly prefer that you just write the unpolished version even if you think it’s “worse”.]
Yeah, you’re right. I usually use AI mostly for translation, but this time I asked it to rewrite some parts that had come out a bit tangled. It said the same things, but expressed them a bit too much in its own way, and later I half-regretted leaving the text like that, too.
> But I do also think HIA and reprogenetics are very good interventions even if there were no AGI X-risk, so for anyone who cares about interventions like that, they should be a top cause area.
I mostly agree with this. On whether they should be a top cause area, less so. As long as it stays framed as a marginal investment or a “secondary strategy” against AI catastrophic risk, it seems more justifiable and defensible to the general public, institutions, or people who might join EA. Making it a top cause area would mean going all-in on it. I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree (both reputationally for EA, and in terms of actual risks from adopting the technology).
> This is largely true, yeah. However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
That’s a good point, but even if HIA demonstrated that we don’t really need AGI, it seems unlikely that society as a whole would give up pursuing it if it could get there first. That said, I agree that even a small increase in the chances of avoiding the risk matters a lot given the stakes.
> not remotely on track to being solved in time, and pouring more resources into the problem basically doesn’t help at the moment.
I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
Ok, thanks for noting! (It occurred to me after I wrote that that translation would be a major use case and obviously a good one.)
> I realize the post was arguing exactly for that, but it seems like a pretty divisive topic even within the EA community itself, and it raises a lot of risks and open questions that other interventions don’t face to the same degree
You’re right, it certainly wouldn’t make sense for it to immediately jump to being a top priority cause, yeah, even if I’m arguing it should maybe be one eventually. If we’re being granular about the computations I’m bidding for, it would be more like “some EAs should do some more investigation into whether this could make sense as a cause area for substantially more investment”.
> it seems more justifiable and defensible to the general public, institutions, or people who might join EA.
Interesting. Regarding people who might join EA, I don’t think I quite see it, but the point is interesting and I’ll maybe think about it a bit more.
That said, in terms of societal justification, I would want to distinguish between motivations about AGI X-risk, and concrete aims and intentions with reprogenetics. The latter is what I’d propose to collectively work on. That would still involve intelligence amplification, and transparently so, as is owed to society. But the actual plan, and the pitch to society, would be more broad. It would be about the whole of reprogenetics. So it would include empowering parents to give their kids an exceptionally healthy happy life, and so on, and it would include policy, professional, social, and moral safeguards against the major downside risks.
In other words, to borrow from an old CFAR tagline, I’m saying something like “reprogenetics for its own sake, for the sake of X-risk reduction”, if that makes any sense.
In a bunch more detail, I want to distinguish:
- (motivation) my background motivation for devoting a lot of effort to HIA and reprogenetics (HIA helping decrease AGI X-risk)
- (explanation of motivation) how I describe/explain/justify my background motivation to people / the public / etc.
- (concrete aims) the concrete aims/targets that I pursue with my actions within the space of reprogenetics
- (explanation of aims) how I describe/explain/justify/commit-to concrete aims
- (proposed societal motivation) what I’m putting forward as a vision / motivation for developing and deploying reprogenetics that would be good and would justify doing so
For honesty’s sake, I personally strongly aim to think and communicate so that:
- My public explanation of my motivation gives an honest (truthful, open, salient, clear) presentation of my actual motivation.
- My public explanation of my concrete aims is likewise honest.
- Both my motivations and my concrete aims are clearly presented.
- My concrete aims have clear boundaries around them. For example, I might commit to certain actions on the basis of my publicly stated concrete aims.
- My concrete aims are consonant with my proposed societal motivation.
This serves multiple purposes. For example:
- I want to work out how to argue to the public that reprogenetics is good “on its own terms”; in particular, I want to argue that it’s good even if you don’t buy into anything about AGI X-risk. This is a stronger position; I want to argue for it and expose it to critique on that basis.
- I want to work out and communicate to the public / stakeholders a vision of how society can orient around reprogenetics that is beneficial to ~everyone. This involves working out societal coordination. The flag of [figuring out what to coordinate on and how] would be more about the concrete aims and the proposed societal motivation, and not about my background motivations.
I would suggest that EA could do something similar. That might work differently / not work at all, in the context of a large social movement. I haven’t thought about that, it’s an interesting question.
> it seems unlikely that society as a whole would give up pursuing it if it could get there first.
Yeah, I’m quite uncertain on this point. I’m interested in understanding better the details of why AGI is actually being pursued, and under what conditions various capabilities researchers might walk away from that research. But that’s a whole other intellectual project that I don’t have bandwidth for; I’d strongly encourage someone to pick that one up though!
> I’m not too optimistic about AI alignment. But does that mean you’d estimate, for example, an extra dollar in HIA has a better chance of solving the problem than spending it directly on AI alignment? Or even that taking a dollar away from alignment right now to move it to HIA would better reduce AI existential risk? (setting aside the case for just a marginal investment, perhaps?)
I do think that the current marginal dollar is much better spent on either supporting a global ban on AGI research, and/or HIA, compared to marginal alignment research. That’s definitely a controversial opinion, but I’ll stand on that (and FWIW, not that I should remotely be taken to speak for them, but for example I would suspect that Yudkowsky and Soares would agree with this judgement). I’m actually unsure whether I personally think the benefit of HIA is more in “some of the kids might solve alignment” vs. “some of the kids might figure out some other way to make the world safe”; I’ve become quite pessimistic about solving AGI alignment, but that’s kinda idiosyncratic.
> However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
I don’t think this is a real contribution. I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high-IQ humans alive in the next few decades. I think they’re trying to make it because they think they can.
And also because they [rightly or wrongly] believe that AGI will be more cost-effective, more controllable, need less sleep, and have higher problem-solving potential than even the smartest possible humans. And be here a lot sooner. (And, in some of the AGI fantasies, a route to making humans genetically smarter anyway!)
Even if one assumes near-term “AGI” has a fairly low ceiling,[1] it seems like “intelligence augmentation” is unpromising as an EA intervention.[2] The necessary research is complex, expensive, long-term, and dependent not just on germline engineering but on academic research to understand what intelligence is in less shallow terms than we currently do. It’s not clear that there are individual tractable interventions. The quantifiable impact—if it actually worked—would presumably be a tiny proportion of people sufficiently rich and focused on maximising their offspring’s intelligence paying to select a few genes somewhat correlated with intelligence for “designer babies”, with the possibility that this might translate enough into real-world outcomes to turn a handful of children with already-above-average prospects into particularly capable and influential individuals. It is not obvious these children will grow up to use their greater talent (real or perceived) for mitigating existential risk or any other sort of greater good.[3] Humans with rich, driven parents who’ve been taught about their superiority to ordinary humans from birth don’t sound immune to “alignment problems” either...
As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
[1] I do actually, but it’s not fashionable here, or indeed at MIRI!

[2] At least, viewed through EA’s analytical lens, rather than the associated cultural tendency to overestimate the importance of individual intelligence.

[3] I mean, what percentage of the world’s smartest people focuses on that now?

Thanks for engaging substantively!

> I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high-IQ humans alive in the next few decades.

I don’t feel confident about this in any direction. However, my sense is that it’s one of the top positive justifications that people use for making AGI (I mean, justifications that would apply in the absence of race dynamics). Not specifically “there won’t be enough smart people”, but rather “humanity doesn’t currently have the brainpower to solve the really pressing problems”, e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say “well, it doesn’t matter, because someone else will do this research anyway, so why not me”. But what about a strong global ban? Then you get objections like “well, hold on a minute, maybe this AI stuff is pretty good, it could cure cancer and so on”. That’s the justification I’m trying to push against by saying “look, we can get all that good stuff on a pretty good timeline without crazy X-risk”.

Regarding your next paragraph, there are a lot of claims there, which I largely think are incorrect, but it’s hard to respond to them in a way that is satisfyingly detailed and convincing while also being short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore

If you’re interested in discussing this at more length, I’d love to have you on for a podcast episode. Interested?

> the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.

Yeah, this is another quite large potential benefit of reprogenetics that I’m excited about. It would require that the technology ends up “safe, accessible, and powerful”.
I guess, just to state where some of the disagreements lie:
- I agree the research is complex and multifaceted. (See for example https://berkeleygenomics.org/articles/Visual_roadmap_to_strong_human_germline_engineering.html and https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html )
- I partially agree about “what intelligence is”, in that this is a quite important area for further research. However, I do not agree that we would need to know more in order to enable parents to make quite [beneficial by their lights] genomic choices on behalf of their future children, including decreasing disease risk and also increasing actual intelligence.
- I agree that at the very beginning some weird rich people would be the ones benefiting. But I’m confident that the technology would become affordable for many—quite plausibly significantly more affordable than IVF currently is (e.g. given IVG). I then suspect many parents would want to give their kid a genomic foundation for high capabilities in general, including intelligence. How much is, of course, up to them; I suspect, though, that there would be plenty of people interested in having very smart kids.
- Regarding “select a few genes”: I’m interested in significantly stronger reprogenetics; we already know many hundreds of genes that contribute to intelligence, and stronger reprogenetics is, biotechnologically speaking, probably feasible—see https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html (a toy sketch after this list illustrates how limited simple selection is by comparison).
- Regarding what the kids will do: yeah, they can and should do what they want, but do you think that this is net bad? Or what would be your guess here? Cf. https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html and https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts
- Regarding this, see also my comment here: https://forum.effectivealtruism.org/posts/QLugEBJJ3HYyAcvwy/new-cause-area-human-intelligence-amplification?commentId=5yxEpv9vFRABptHyd
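To put a rough number on the “select a few genes” scenario, here is a minimal Monte Carlo sketch. Every parameter is an illustrative assumption (a predictor capturing 10% of IQ variance, iid embryo scores; it ignores within-family shrinkage of polygenic scores, which would reduce the gains further), so treat it as showing the shape of the argument, not as an estimate:

```python
# Toy Monte Carlo: expected gain (in IQ points) from picking the
# top-scoring of n embryos with a polygenic predictor. All numbers
# here are illustrative assumptions, not estimates.
import numpy as np

rng = np.random.default_rng(0)

IQ_SD = 15.0         # population SD of IQ
PREDICTOR_R2 = 0.10  # assumed fraction of IQ variance the predictor captures

def gain_from_selection(n_embryos: int, trials: int = 200_000) -> float:
    """Mean predicted advantage of the best of n embryos, modeling embryo
    predictor scores as iid normal draws (ignoring sibling correlation,
    which would shrink this further)."""
    predictor_sd = IQ_SD * np.sqrt(PREDICTOR_R2)  # ~4.7 IQ points
    scores = rng.normal(0.0, predictor_sd, size=(trials, n_embryos))
    return float(scores.max(axis=1).mean())

for n in (2, 5, 10):
    print(f"best of {n:2d}: ~{gain_from_selection(n):.1f} IQ points")
# Under these assumptions, even best-of-10 selection yields only a handful
# of points, which is why "select a few genes" understates the question:
# the interesting case is stronger germline methods acting on hundreds of
# variants at once.
```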