Aging research and population ethics
This is the second of a series of posts in which I’m trying to build a framework to evaluate aging research. Previous post: A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity.
The first part of this post will explore how sensitive the impact of aging research is to two different views of population ethics. Under the person-affecting view, in which creating new lives has neutral moral value, aging research seems really valuable. Under the impersonal view, in which creating new lives has positive moral value, the case could be less clear. By looking at demographic trends and analyzing why people have children, it turns out that saving people by hastening the arrival of Longevity Escape Velocity (LEV) wouldn’t prevent births and could actually increase the world’s average fertility rate. This leads to a counterintuitive result: aging research could be even more valuable under the impersonal view of population ethics.
In the second part of the post, I’ll explore how to reason about moral weights, which could further increase the impact of making LEV come closer if longer lives are valued more than shorter lives for reasons other than QALYs. There are various arguments for preferring some kind of age-discounting or its opposite, but the answer ultimately depends on what a 1000-year life and mind look like and how they differ from the life and mind of a shorter-lived person. I therefore suggest taking a neutral stance, unless you believe that the future, if there is one, is more likely to be better than the present; in that case, the lives of people saved through LEV should count for more.
At first glance, the impact of aging research seems to change greatly depending on whether you adopt the impersonal view of population ethics or the person-affecting one. In the impersonal view, creating new lives is regarded as good. Assuming there is no suffering at the end of life and people are replaced immediately, this view sees no ethical difference between making people live longer and replacing them with new people. Under the person-affecting view, however, creating new lives is not valued: only already existing people count, and so how bad it is to die depends on the amount of well-being lost.
MichaelPlant reminded me of this point under my previous post. I gave an answer there, but I don’t think it was sufficient, so a more careful analysis of this consideration is warranted here.
It seems that if the person-affecting perspective is adopted, then aging research has enormous value. That is the impact outlined in the previous post.
It seems, though, that aging research could have at least the same value under the impersonal view, provided that extending healthy life does not mean taking the place of potential newborns. When trying to determine whether it would, it’s tempting to think about the very far future and start with these questions: will humanity use all the resources at its disposal at any given time? And even if it will not, will it still control its population growth in order to maximize well-being? If the answer to either question is “yes”, then it seems that extending life would prevent births.
But starting with these questions and thinking about the far future is the wrong approach. Reminder: most of the impact of aging research comes from making the date of LEV come closer, thereby saving the people who wouldn’t otherwise have reached LEV. If LEV happens at all, it will very probably happen in this century or the next. Therefore, to answer the question “will extending life prevent births?” we need to look at how society currently works and at current demographic trends.
The choice to have children does not currently seem to be driven by lack of resources (poverty); on the contrary, the number of children per family is falling sharply as standards of living rise. This trend holds in every part of the world, underdeveloped nations included.
This means that making people live longer in this century or the next is not going to prevent potential births. Births will probably decline regardless, and making old people healthy and productive will help avert the economic disaster looming due to increasingly aged populations, even in underdeveloped countries.
Quite the opposite could prove true: people with longer lifespans could have more children, simply because an extended childbearing window would give them much more time to procreate. The fertility rate would therefore probably increase. This consideration could even be the reason a scenario requiring population control eventually materializes, although I tend to think that ceiling is very far away, given how much room technology still has to improve. If longer lifespans do increase the world fertility rate, then the impact under the impersonal view of population ethics is the sum of the QALYs saved by making LEV come closer, plus the QALYs of the newborns of the people saved, who wouldn’t otherwise have been born.
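The sum described above can be sketched in a few lines; every number here is a purely hypothetical placeholder, not an estimate:

```python
# Purely illustrative sketch; all numbers are hypothetical placeholders.
# Impersonal-view impact if longer lifespans raise fertility: QALYs from
# the people saved by an earlier LEV date, plus the QALYs lived by the
# extra children those people go on to have.

people_saved = 100_000_000      # hypothetical: people saved by an earlier LEV
qalys_per_person = 1000         # hypothetical: QALYs gained per person saved
extra_births_per_person = 0.5   # hypothetical: added children per person saved
qalys_per_newborn = 1000        # hypothetical: QALYs lived by each extra child

qalys_saved = people_saved * qalys_per_person
qalys_newborns = people_saved * extra_births_per_person * qalys_per_newborn

impersonal_impact = qalys_saved + qalys_newborns
print(f"{impersonal_impact:.3e} QALYs")
```

The point of the sketch is only the structure: if the second term is positive, the impersonal view adds value on top of the person-affecting one rather than subtracting from it.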
Additionally, if longer lives are more valuable than shorter ones for reasons other than the number of QALYs, the impersonal view could still favor longer lives over perfectly replacing them with shorter ones. This brings us to the choice of moral weights.
An important question that could substantially affect the measure of impact is how to choose moral weights. I think that it’s practically impossible to come to a definitive answer due to a lack of empirical information, but I can outline possible ways to reason about the problem.
The central question seems to be: is a 1000-year life intrinsically more valuable, less valuable, or equally valuable compared with many shorter lives that sum to 1000 years?
One argument for why it could be less valuable is simple: it’s only one life. I wouldn’t find it strange if many people valued several distinct shorter lives over a single long one, out of some intuition favoring variety or even fairness. This is also supported by the intuition that many people would choose a 90% chance of living a normal human lifespan over a 10% chance of living for 900 years.
One life, intuitively, is “fresh” only once. Someone may value shorter lives more because they could be, intuitively, more imbued with fresh experiences: each one passes through infancy, adolescence, and all the other phases of life.
At first glance, the first argument seems weaker: after all, one person is never really the same. The mind changes continuously, and someone could retain very little of themselves after living century upon century. Would such a person experience less novelty? Possibly, unless the future holds truly incredible new experiences and surprises. But is novelty all there is to consider?
Many lives are probably imbued with more novelty, but one long life could mean insight and accumulated knowledge that would be impossible within a single ordinary lifespan. Anecdotes abound of old scientists and luminaries with vast visions of their fields but no longer the sharpness of mind to contribute, especially in the hard sciences and mathematics. Each of their deaths is a burnt library of insight and knowledge. Cutting their lives short at that point also prevents any future experience that knowledge would have made possible. In some sense, it feels like quitting the game just when the fun begins, and this also says something about novelty, which may not run out nearly as soon as one might think. Much longer lifespans could also make possible deeper, otherwise unreachable emotions and states of mind, making longer lives more valuable. This seems obvious if we return to the example of luminaries: for an ordinary individual, a normal human lifespan may not be enough to acquire a luminary’s knowledge. A short life may thus constitute a hard wall against what most people can experience.
Another intuition that might make one consider a longer life more valuable is this: there is a pretty strong case for preferring one generation of people living 80 years over multiple generations of children living only to age five. Maybe the same intuition applies to longer lifespans: are people living to 100 like children compared to someone living to 1000? The answer can’t be definitive. I think it depends on information we currently don’t have: what a 1000-year life and mind look like and how they differ from the life and mind of a shorter-lived person.
Intuitions about how to assign moral weights carry risks in both directions: if we lean toward valuing longer lives more, we could be overestimating how much more “enlightened” a human mind can become; if we lean toward valuing shorter lives more, we may underestimate that same variable, or even commit a mistake akin to scope insensitivity if we don’t think about the problem deeply enough.
One consideration that could shift the needle considerably is whether you deem it probable that the future will be better than the present, or instead worse. I think the far future is more likely to be either utopian or simply devoid of life than worse than the present. The probability of existential risk has to be factored in as a discount on impact, but it is not part of moral weights; so, for this reason, I would tend to ethically value longer lives more than shorter ones.
However, if you think the probabilities of the future being better or worse than the present offset each other, then there are good arguments for both ways of assigning moral weights, and I would apply neither age-discounting nor its opposite: a neutral stance is probably preferable. That said, different analysts should think about the problem themselves, and if they believe one outcome is more likely than the other, they may want to correct these crude estimates accordingly.
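The separation argued for above, with existential risk as a discount on impact and moral weights as an independent factor, can be sketched as follows; every number is a hypothetical placeholder, not an estimate:

```python
# Purely illustrative sketch; all numbers are hypothetical placeholders.
# Existential risk discounts expected impact; a moral weight scales the
# value of the longer lives saved. The two enter as separate factors.

raw_qalys = 1.0e11    # hypothetical QALYs saved by making LEV come closer
p_survival = 0.8      # hypothetical probability of no existential catastrophe
moral_weight = 1.0    # neutral stance: no age-discounting in either direction

expected_impact = raw_qalys * p_survival * moral_weight
print(f"expected impact: {expected_impact:.2e} QALYs")
```

An analyst who believes the future is more likely to be better than the present could set `moral_weight` above 1; one who expects the opposite could set it below 1, leaving the existential-risk discount untouched in either case.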
Crossposted to LessWrong