Longevity research as AI X-risk intervention

Core argument:

I propose that one possible motivation for rushing to build GAI in our lifetimes is that it is a high-risk, high-reward gamble to increase human healthspan and lifespan. If one primarily values present-day humans, particularly oneself, one’s family, friends, local community, and nation, then this is a sound strategy for maximizing expected benefit. The downside risk is that the people closest to the builders die a few decades earlier than they otherwise would have. The upside potential is that they live for many centuries, or even longer.

Under a longtermist moral calculus, by contrast, rushing to build GAI is not a risk worth taking, because of the serious risk of curtailing humanity’s future.

If it is not possible to persuade GAI builders to act according to a longtermist framework, then an alternative is to placate them. One way to do this would be to rush to achieve success in human-directed longevity medicine.

Here, I do not claim that this core argument is certainly correct. I argue that EAs should take it seriously, both because it identifies a neglected strategy for potentially contributing to AI safety, and because technical AI alignment researchers have expressed open pessimism about their likelihood of success. This does not mean people should switch from technical AI alignment to longevity research, but that EA should consider encouraging people with a good fit for longevity research to go into this field.

What is longevity research?

Your body constantly suffers massive DNA damage. The sun irradiates your skin, your cells produce toxins in their little biomolecular factories, and your DNA-copying machinery is only almost perfect. Under that onslaught, it’s no surprise that people’s bodies break down over time and they start acquiring the diseases of old age.

Let’s call ordinary biomedical research “disease research” for short. It studies cancer, heart disease, Alzheimer’s, diabetes, infections, the weakening of bones, pain in the joints, and many other ills.

Longevity research claims that we can resist, and even defeat, this host of illnesses. After all, your body also has wonderful DNA repair tools that fix nearly all of this damage. And even when repair fails, most cells eventually die and get replaced by your stem cells. These stem cells split in two: one stays a stem cell, while the other matures and differentiates into the lost cell type, migrates to the vacated position, and takes up its old job.

So why do these repair mechanisms eventually fail us? To answer that question, we have to study how our biology changes over our lifespan in ways that either directly produce disease, or make it easier for disease to strike us. That’s a subtle distinction, worth elaborating.

One way aging causes problems is that our repair mechanisms have fundamental limits. As one clear example, stem cells also accumulate DNA damage over the course of one’s life. And nothing’s replacing them. Eventually, dysregulated stem cells can give rise to cancer or to distorted daughter cells that provoke autoimmune reactions. This is an example of how, as we get older, our biology can directly generate disease.

Most of my readers will know that cancer cells are simply the patient’s own cells, but in a disordered state. They might also know that the immune system normally kills cancer cells before they can turn into tumors. Unless you were unlucky enough to get an unusually early cancer, or are an older reader who has already faced one, your body has been silently staving off cancer for your entire life.

But those protections can fail, making it easier for cancer and other diseases to strike. The thymus is the organ where some of your most important immune cells, T cells, are born. As you age, your thymus shrinks. You still have your old T cells, just fewer new ones. The memory of diseases you’ve faced in the past is embodied in your old T cells; new T cells are how your immune system adapts to new diseases. As your thymus shrinks, you lose those new T cells, and your body gets worse at fighting off new types of infections.

So let’s come back to our subtle distinction between how aging directly produces disease and how it fails to prevent disease. When stem cells pick up DNA damage, and nothing is there to fix or replace them, they can directly cause cancer. By contrast, when your thymus shrinks and you have fewer new T cells, your body’s ability to prevent infections declines, though you must still encounter an infection before that protective breakdown poses a health threat.

Longevity research focuses on trying to solve these issues before they can cause or allow disease in the first place. There’s evidence that we can prevent the thymus from shrinking. We might be able to destroy old stem cells and replace them with new ones. Low doses of the drug rapamycin robustly extend lifespan in animal models, and a large study of low-dose rapamycin’s safety and efficacy is underway in dogs right now. And of course, healthy diet and exercise, as well as a clean and safe environment, are the original and most accessible forms of longevity medicine.

Not only might these be new ways of tackling old diseases, but success with any one of them might push back the time when we start getting the diseases of old age. We call the period of your life in which you do not suffer serious diseases your “healthspan.” One goal of longevity research is extending lifespan; the other is extending healthspan. Ideally, we would have long lives, with serious disease compressed into a short window at the very end.

Over time, we might enter an era when longevity research is producing results so quickly that life expectancy increases by more than one year per year. For example, perhaps in 2032, life expectancy in the USA is 85 years. One year later, in 2033, it is two years longer: 87 years. Another year later, in 2034, it is again two years longer: 89 years. If that trend continued indefinitely, this would be called “longevity escape velocity.”
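To make the arithmetic concrete, here is a toy calculation in Python. The function and every number in it are hypothetical, carried over from the example above; the point is only that if life expectancy grows by more than one year per year, a person’s expected remaining years can increase even as they age.

```python
# Toy arithmetic for "longevity escape velocity" (hypothetical numbers).
# If life expectancy grows by more than one year per calendar year,
# expected remaining life can rise even as a person ages.

def remaining_years(age, year, base_year=2032, base_expectancy=85.0,
                    gain_per_year=2.0):
    """Expected years left at `age` in `year`, assuming life
    expectancy grows linearly from the base year onward."""
    expectancy = base_expectancy + gain_per_year * (year - base_year)
    return expectancy - age

# Track a hypothetical 40-year-old starting in 2032:
for elapsed in (0, 5, 10):
    print(2032 + elapsed, 40 + elapsed,
          remaining_years(40 + elapsed, 2032 + elapsed))
# Output:
# 2032 40 45.0
# 2037 45 50.0
# 2042 50 55.0
```

Each year lived adds one year of age but, in this toy trend, two years of life expectancy, so the expected remainder keeps growing. (This treats a single life-expectancy figure as applying to everyone at every age, so it illustrates the arithmetic of the trend, not real demography.)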

“Escape velocity” is a misleading analogy, though. We use the term for a rocket ship because we understand the laws of physics well enough to know in advance how fast that rocket needs to travel in order to escape Earth’s gravity. When we say “velocity,” we are literally talking about velocity. By contrast, even if life expectancy were increasing by more than one year with each passing year, we would not necessarily be able to predict whether or when that trend would come to an end.

We can easily imagine achieving annual gains of more than one year in life expectancy without being able to produce anything like the rocketry charts that explain, in advance, how further such gains will arrive.

Gains in longevity will be less like “escape velocity” and more like Moore’s Law, which we probably ought to have named Moore’s Trend. It’s simply a name for the long-term pattern of packing ever more transistors into the same area of circuitry, doubling roughly every two years, until the trend started slowing down.

We don’t know if a trend of steadily increasing healthspan or lifespan will ever take place. We also don’t know how fast such a trend would grow, or how long it would last. Biology is different from rocketry, and it’s different from transistor manufacturing. With rocketry, we figured out a long time ago how to get rockets into outer space; the problem is solved, and we’ve moved on to refinements and other issues. With transistors, we’re guaranteed to hit some sort of physical limit as they continue to shrink.

On a practical level, it’s hard to know how to think about biology. I like to imagine that keeping our bodies alive and healthy is like playing a game of hacky sack. It’s hard to keep the sack in the air, but it seems like we could engineer ways to keep it airborne indefinitely while still following all the rules of the game.

Would longevity research help with AI safety?

To begin, I am not claiming that there’s an overwhelming case that successful longevity research would help with AI safety. I am going to outline a case that it might, along with some critiques. My primary claim is that this argument is of pressing importance, and is worth exploring further.

As stated in the core argument at the top of this post, GAI is a risky but potentially high-upside gamble to increase human life expectancy for present-day people. Many AI capabilities researchers don’t understand biomedical research, but they do understand computers, and they may see GAI as their most effective way to make longevity research go faster. It’s almost pure upside for them: if it pays off, they and their loved ones benefit; if it causes an X-catastrophe, they were going to die anyway. The upside potential is enormous, and the downside potential to them is capped.

The main approach we hope will avert AI-driven X-catastrophe is technical AI alignment research. But achieving technical AI alignment might take more time than we have, and it might not even be theoretically possible. Eliezer Yudkowsky has expressed deep, open pessimism about the chance of alignment success. His complaints weren’t just about how quickly AI capabilities research is progressing, but about how unlikely the currently proposed technical solutions are to work, either at all or in any kind of reasonable timeframe under current social norms.

If we achieved “longevity Moore’s Law,” that might alter the risk calculus of AI capabilities researchers, massively increasing the downside potential to them personally (because they would now expect to live longer) while limiting the upside potential (because human-directed biomedicine would be beating AI to the longevity gains). At a certain point, in this model, AI capabilities researchers would refocus from increasing capabilities to increasing safety.
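As a sketch of that flip, here is a toy expected-value model in Python. The function and all of its numbers are my own invention, chosen only to encode the structure of the gamble described above, in which an X-catastrophe costs the builder whatever baseline years they had left.

```python
# Toy expected-value model of the GAI gamble (all numbers invented).
# Two outcomes for the builder: success grants a very long life;
# an X-catastrophe wipes out whatever baseline years remained.

def gamble_advantage(p_success, years_if_success, baseline_years):
    """Expected personal gain, in years, from rushing GAI rather
    than keeping the baseline remainder of one's life."""
    ev_rush = p_success * years_if_success  # catastrophe pays 0 years
    return ev_rush - baseline_years

# Today: a researcher expecting ~40 more years without GAI.
print(gamble_advantage(0.1, 1000.0, 40.0))   # prints 60.0 -> rushing wins

# After "longevity Moore's Law": the baseline itself is centuries.
print(gamble_advantage(0.1, 1000.0, 500.0))  # prints -400.0 -> rushing loses
```

With a short baseline remainder, even a modest chance of success dominates the gamble; once longevity medicine stretches the baseline into centuries, the same gamble has sharply negative expected value. That is the refocusing this model predicts.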

If GAI is coming in the short term, this strategy is useless. Longevity research is still the Wild West. Billions of dollars poured into the nascent industry just this year, massively changing the funding landscape, and there are credible advanced preclinical studies of serious longevity drug candidates going on right now. But that’s probably not enough to alter, within the next few years, the risk calculus I am positing that AI capabilities researchers make. If we’re on track to achieve GAI in the next ten years, longevity research is probably no help.

But it may be that GAI will take longer. In this case, technical AI alignment may or may not be achievable in time. If it is not, or if one is not suited for technical AI alignment research, then it makes sense to put one’s efforts into alternative strategies to stave off medium-term X-risk from AI. One strategy is working toward “longevity Moore’s Law,” in the hope that this causes a massive worldwide reconsideration of how people do risk/benefit calculus, among AI capabilities researchers, politicians, and the public.

Right now, many people are advocating a switch from longtermist justifications of AI safety’s urgency to a messaging approach of “AI might kill you and everyone you love.” That message would be more powerful if it were “AI might kill you and everyone you love, and death is no longer an inevitability!”

Critique 1: The argument proves too much

One criticism of this argument is that conventional biomedical research can also extend lifespan and healthspan. Perhaps by subdividing known diseases into stages and types, becoming excellent at personalized medicine, and developing medications with better precision in delivery and dose, we can suppress diseases indefinitely even as the aging body generates them faster. It would be like getting so good at playing Whack-a-Mole that you might actually win.

Yet the biomedical industry is old, huge, full of incredibly smart and hardworking people, faces enormous financial incentives, and hasn’t so far achieved “longevity Moore’s Law.” People working in that field are smart enough to choose a “longevity medicine” approach to drug design if they thought it was likely to work. If they haven’t, shouldn’t we take that as a sign that “longevity medicine” is nothing more than a buzzword? Why would we think that investing in “longevity”-branded research would be a better approach than investing in biomedical research generally? And if it seems implausible that investing in, say, cancer research would be an effective AI X-risk intervention, why would we think “longevity medicine” is worth looking at?

The argument that longevity medicine is an X-risk intervention proves too much. We can’t predict if or how “longevity Moore’s Law” will come about, and there’s no reason to draw a line between the nascent “longevity” field and the mature “biomedical” field, rather than dividing up the categories some other way. It begs the question of whether “longevity” medicine is an especially promising healthspan or lifespan generator. We don’t actually have a great reason to believe that; if we did, then everybody would know it and there’d be a rush to supply the field with resources. It is receiving a sudden burst of resources, but is that signal enough to give the longevity field credibility above (or on par with) traditional biomedicine? It’s unclear.

Critique 2: The argument is premature

If we are trying to placate AI capabilities researchers by building the technologies they want without GAI, then the first thing we ought to do is find out what they want. We ought to read corporate mission statements, study the public comments of AI capabilities researchers, and run opinion surveys.

They might want any number of things: the pleasure of professional achievement, wealth and power, the thrill of discovery, or improvements to daily life even without life extension. Their beliefs might be quite exotic in some cases. In any case, the first step would be to find out, not to make an assumption and rush ahead into longevity research based on it.

Critique 3: Doing more harm than good

If people were able to live much longer, they could also work much longer. In the short run, this mostly benefits people who are nearing the end of their working lives. AI safety is a small, new field mostly populated by young people. AI capabilities is a big, relatively well-established field; although many of its researchers also seem to be young, it may have a higher mean researcher age. Extending the length of people’s working lives would therefore mainly extend the careers of AI capabilities researchers.

If the argument that this achievement in longevity would persuade AI capabilities researchers to refocus on safety turns out to be wrong, then the effect of longevity medicine would be to disproportionately preserve the research effort going into capabilities. It might even embolden researchers who place no credence in AI as a potential threat: “we’ve already shown that longevity is tractable, so now we absolutely must develop GAI as fast as possible to lock in and extend these gains!”

Furthering the conversation

For those inclined to technical AI safety research or policy work, I think that’s still their best bet. But for those with good personal fit for medicine, economics, sociology, anthropology, or engineering, I think it is worth developing the argument for longevity medicine with further critiques and rebuttals. Based on the state of the argument as it is now, “maybe longevity medicine is an effective X-risk intervention, maybe it isn’t, we just don’t know” is not the proper response.

The proper response to this uncertainty is “this might be a tractable strategy for employing a whole new category of people on perhaps the most pressing problem in the world, and it might also be a risk factor for that problem. We should work damned hard to figure out which one it is.”

I myself am going into longevity medicine because, on balance, I think that longevity medicine has a net positive expected value for AI safety, that it’s an especially promising approach to biomedicine, and that it’s a good fit for me. But I am not certain about this. I would welcome more attention being paid to both sides of this argument to help steer my own career trajectory, and to inform others making similar decisions. My aim here is to lay down a foundation for further discussion.
