Artificial Superintelligence is not going to magically cure all diseases and ‘solve longevity’
A mini-rant about AI and longevity. (This is a slightly revised version of a post I shared on X today.)
They say “Artificial Superintelligence would take only a few years to cure cancer, solve longevity, and defeat death itself”.
This is a common claim by pro-AI lobbyists, accelerationists, and naive tech-fetishists. But the claim makes no sense.
The recent success of LLMs does NOT suggest that ASIs could easily cure diseases or solve longevity, for at least two reasons.
1) The data problem. Generative AI for art, music, and language succeeded mostly because AI companies could steal billions of examples of art, music, and language from the internet, to build their base models. They weren’t just trained on academic papers _about_ art, music, and language. They were trained on real _examples_ of art, music, and language.
There are no analogous biomedical data sets with billions of data points that would allow accurate modelling of every biochemical detail of human physiology, disease, and aging. ASIs can’t just read academic papers about human biology to solve longevity. They’d need direct access to vast quantities of biomedical data that simply don’t exist in any easy-to-access forms. And they’d need very detailed, reliable, validated data about a wide range of people across different ages, sexes, ethnicities, genotypes, and medical conditions. Moreover, medical privacy laws would make it extremely difficult and wildly unethical to collect such a vast data set from real humans about every molecular-level detail of their bodies.
2) The feedback problem. LLMs also work well because the AI companies could refine their output with additional feedback from human brains (through Reinforcement Learning from Human Feedback, RLHF).
But there is nothing analogous to that for modeling human bodies, biochemistry, and disease processes. There are no known methods of Reinforcement Learning from Physiological Feedback. And the physiological feedback would have to be long-term, over spans of years to decades, taking into account thousands of possible side-effects for any given intervention. There’s no way to rush animal and human clinical trials—however clever ASI might become at ‘drug discovery’.
More generally, there would be no fast feedback loops from users about model performance. GenAI and LLMs succeeded partly because developers within companies, and customers outside companies, could give very fast feedback about how well the models were functioning. They could just look at the output (images, songs, text), and then tweak, refine, test, and interpret models very quickly, based on how good they were at generating art, music, and language. In biomedical research, there would be no fast feedback loops from human bodies about how well ASI-suggested interventions are actually affecting human bodies, over the long term, across different lifestyles, including all the tradeoffs and side-effects.
It’s interesting that most of the people arguing that ‘ASI would cure all diseases and aging’ are young tech bros who know a lot about computers, but almost nothing about organic chemistry, human genomics, biomedical research, drug discovery, clinical trials, the evolutionary biology of senescence, evolutionary medicine, medical ethics, or the decades of frustrations and failures in longevity research. They think that ‘fixing the human body’ would be as simple as debugging a few thousand lines of code.
Look, I’m all for curing diseases and promoting longevity. If we took the hundreds of billions of dollars per year that are currently spent on trying to build ASI, and we devoted that money instead to longevity research, that would increase the amount of funding in the longevity space by at least 100-fold. And we’d probably solve longevity much faster by targeting it directly than by trying to summon ASI as a magical cure-all. ASI has some potential benefits (and many grievous risks and downsides).
But it’s totally irresponsible of pro-AI lobbyists to argue that ASIs could magically & quickly cure all human diseases, or solve longevity, or end death. And it’s totally irresponsible of them to claim that anyone opposed to ASI development is ‘pro-death’.
I fully agree with this post.
I think this type of belief comes from a flattened understanding of the difficulty of doing things: it’s assumed that because doing well on a math olympiad is hard, and curing death is hard, an AI that can do the former will soon be able to do the latter. But in fact, curing death is so much more difficult than a math olympiad that it breaks the scale.
You can also see this in the casual conflation of things like “cure cancer” and “cure death”. The latter is many, many, many orders of magnitude more difficult than the former: claiming that the latter would occur at the same time as the former is an incredibly extraordinary claim, and it requires commensurate evidence to back it up.
The chief argument in favour of this is “recursive self-improvement”, but intelligence is not a magic spell that you can just dial up to infinity. There are limits in the forms of empirical knowledge, real-world resources, and computational complexity. Certainly current-day AI seems to be limited by scaling laws that would become impractical a pretty fucking long way from god-like intelligence.
Semantic quibble: I think most people, myself included, simply define ASI as either encompassing those capabilities or being sufficient at recursive self-improvement such that it will possess those capabilities in short order.
If your point is primarily that the existing AI paradigm is inadequate, I would tend to agree. There’s also a distinct question of what an intelligence explosion looks like; it may well be that tedious real-world experimentation is necessary for these sorts of biomedical advances, which takes time. That too is a compelling possibility; but I would expect it in a decade at most and certainly quicker than human R&D can advance.
Dylan, this seems to overlook my two central points: that no matter how smart an ASI is, or how much compute it can do, its ability to model human biochemistry and physiology is going to be limited by the biomedical data it can access, and by the slow speed of feedback about whether any given interventions (e.g. to promote longevity) are actually working, with minimal tradeoffs and side-effects. It can’t solve a problem that it can’t model accurately.
An ASI solving very hard empirical problems (e.g. ‘curing aging’) has to have the training data needed to solve the problem. At the moment, we probably have only 0.01% of the data that we’d need. And gathering that data would require informed consent from millions of people. And I don’t imagine you’d be happy to give an ASI full autonomous power to gather such biomedical data, at scale, however it wants… that leads straight to a body horror dystopia.
I do tend to favor longer AGI/TAI timelines than many for roughly these reasons. But I don’t think you are exactly right about the AI data access trend. For one, whether or not I or Americans at large are “happy to give an ASI full autonomous power to gather such biomedical data”, China will be.
I tentatively expect capabilities with real-world economic importance to come to some extent in the US as well, even if the most radical and transformative stuff requires further integration into the physical world for modeling. And at that point there may simply be an iterative process of greater and greater integration, as public perception improves and dependence increases. The complication here is moral backlash of some sort, which I note you’ve written about before. I agree that this is plausible, I simply wouldn’t call it probable. Things look more bi-modal to me: most likely we get the outcome I’ve described above (mild harms could still be disregarded by China), or we get a longer slowdown before curing aging.