I don’t think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades.
I don’t feel confident about this in any direction. However, my sense is that it’s one of the top positive justifications that people use for making AGI (I mean, justifications that would apply in the absence of race dynamics). Not specifically “there won’t be enough smart people”—but rather, “humanity doesn’t currently have the brainpower to solve the really pressing problems”, e.g. cancer, longevity, etc. If you tell an isolated person or company to stop their AGI research, they can just say “well it doesn’t matter because someone else will do this research anyway, why not me”. But what about a strong global ban? Then you get objections like “well hold on a minute, maybe this AI stuff is pretty good, it could cure cancer and so on”. That’s the justification that I’m trying to push against by saying “look, we can get all that good stuff on a pretty good timeline without crazy x-risk”.
Regarding your next paragraph, there are a lot of claims there, which I largely think are incorrect, but it’s kinda hard to respond to them in a way that is both satisfyingly detailed+convincing and also short enough for a comment. I would point you to my research, which addresses some of these questions: https://berkeleygenomics.org/Explore
If you’re interested in discussing this at more length, I’d love to have you on for a podcast episode. Interested?
the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
Yeah, this is another quite large potential benefit of reprogenetics that I’m excited about. It would require that the technology ends up “safe, accessible, and powerful”.
Thanks for engaging substantively!