However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
I don’t think this is a real contribution. I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades. I think they’re trying to make it because they think they can.
And also because they [rightly or wrongly] believe that AGI will be more cost-effective, more controllable, less in need of sleep, and higher in problem-solving potential than even the smartest possible humans. And be here a lot sooner. (And, in some of the AGI fantasies, a route to making humans genetically smarter anyway!)
Even if one assumes near-term “AGI” has a fairly low ceiling,[1] “intelligence augmentation” seems unpromising as an EA intervention.[2] The necessary research is complex, expensive, and long-term, and depends not just on germline engineering but on academic research to understand intelligence in less shallow terms than we currently do. It’s not clear that there are individually tractable interventions. The quantifiable impact, if it actually worked, would presumably come from a tiny proportion of people, sufficiently rich and focused on maximising their offspring’s intelligence, paying to select a few genes somewhat correlated with intelligence for “designer babies”, with the possibility that this might translate into real-world outcomes strongly enough to turn a handful of children with already above-average prospects into particularly capable and influential individuals. It is not obvious that these children would grow up to use their greater talent (real or perceived) to mitigate existential risk or any other sort of greater good.[3] Humans with rich, driven parents who’ve been taught from birth about their superiority to ordinary humans don’t sound immune to “alignment problems” either.
As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
[1] I do actually, but it’s not fashionable here, or indeed at MIRI!
[2] At least, viewed through EA’s analytical lens rather than its associated cultural tendency to overestimate the importance of individual intelligence.
[3] I mean, what percentage of the world’s smartest people focuses on that now?
Thanks for engaging substantively!
I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades.

I don’t feel confident about this in any direction. However, my sense is that it’s one of the top positive justifications people use for making AGI (I mean, justifications that would apply even in the absence of race dynamics). Not specifically “there won’t be enough smart people”, but rather “humanity doesn’t currently have the brainpower to solve the really pressing problems”, e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say “well, it doesn’t matter, because someone else will do this research anyway; why not me?”. But what about a strong global ban? Then you get objections like “well, hold on a minute, maybe this AI stuff is pretty good; it could cure cancer and so on”. That’s the justification I’m trying to push against by saying “look, we can get all that good stuff on a pretty good timeline without crazy x-risk”.
Regarding your next paragraph: there are a lot of claims there, most of which I think are incorrect, but it’s hard to respond to them in a way that is both satisfyingly detailed and convincing and also short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore
If you’re interested in discussing this at more length, I’d love to have you on for a podcast episode. Interested?
the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.

Yeah, this is another quite large potential benefit of reprogenetics that I’m excited about. It would require that the technology end up “safe, accessible, and powerful”.
I guess, just to state where some of the disagreements lie:
I agree the research is complex and multifaceted (see for example https://berkeleygenomics.org/articles/Visual_roadmap_to_strong_human_germline_engineering.html and https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html).
I partially agree about “what intelligence is”, in that it is a quite important area for further research. However, I do not agree that we would need to know more in order to enable parents to make quite [beneficial by their lights] genomic choices on behalf of their future children, including decreasing disease risk and increasing actual intelligence.
I agree that at the very beginning some weird rich people would be the ones benefiting. But I’m confident that the technology would become affordable for many, quite plausibly significantly more affordable than IVF currently is (e.g. given IVG). I then suspect many parents would want to give their kid a genomic foundation for high capabilities in general, including intelligence. How much is, of course, up to them; I suspect, though, that there would be plenty of people interested in having very smart kids.
Regarding “select a few genes”: I’m interested in significantly stronger reprogenetics. We already know many hundreds of genes that contribute to intelligence, and stronger reprogenetics is, biotechnologically speaking, probably feasible; see https://berkeleygenomics.org/articles/Methods_for_strong_human_germline_engineering.html
Regarding what the kids will do, yeah, they can and should do what they want, but do you think that this is net bad? Or what would be your guess here? Cf. https://tsvibt.blogspot.com/2025/11/hia-and-x-risk-part-1-why-it-helps.html and https://www.lesswrong.com/posts/K4K6ikQtHxcG49Tcn/hia-and-x-risk-part-2-why-it-hurts
Regarding this, see also my comment here: https://forum.effectivealtruism.org/posts/QLugEBJJ3HYyAcvwy/new-cause-area-human-intelligence-amplification?commentId=5yxEpv9vFRABptHyd
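To make the “select a few genes” disagreement a bit more concrete: a common toy model of embryo selection says the expected trait gain is the polygenic predictor’s standard deviation times the expected maximum of n standard normal draws. The sketch below is my own illustration, not a figure from either commenter; the predictor strength (capturing ~5% of IQ variance) and embryo counts are hypothetical assumptions chosen only to show the model’s shape.

```python
import random

def expected_best_of(n, trials=20000, seed=0):
    """Monte Carlo estimate of E[max of n i.i.d. standard normal draws].

    This is the 'selection intensity' for picking the best of n embryos
    when ranked by a polygenic predictor, under the toy model's
    assumption that predictor scores are i.i.d. normal across embryos.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0.0, 1.0) for _ in range(n))
    return total / trials

# Hypothetical predictor: captures ~5% of IQ variance, so its SD in
# IQ points is sqrt(0.05) * 15 ≈ 3.4. (Assumed number, not a citation.)
predictor_sd = (0.05 ** 0.5) * 15

for n in (2, 5, 10):
    gain = predictor_sd * expected_best_of(n)
    print(f"best of {n} embryos: ~{gain:.1f} IQ points (toy model)")
```

By this model, even selecting the best of 10 embryos on a weak predictor yields only a few points, which is roughly the shape of the disagreement above: selection over a handful of variants is inherently weak, whereas the “stronger reprogenetics” position is that directly editing or recombining many hundreds of variants could, in principle, go much further.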