Hey, regarding aging: you might be interested to know I’m writing a series of articles to evaluate the cost-effectiveness of any project related to aging research. I’ve found that the cost-effectiveness of aging research might be much higher, in certain cases, than what Sarah Constantin found. That’s mostly because I’m also accounting for the fact that new aging research brings the date of Longevity Escape Velocity closer in time, and this increases the scope by many orders of magnitude. Each single year “bought” means averting 36,500,000,000 QALYs, using a conservative estimate (36,500,000 deaths by aging per year multiplied by 1,000 estimated years free of disability after LEV). Check out my profile for all the articles I’ve written.
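The arithmetic behind that estimate can be sketched in a few lines (the figures are the conservative assumptions stated above, not measured data):

```python
# Back-of-the-envelope estimate using the assumed figures from the comment above
deaths_per_year = 36_500_000       # assumed deaths by aging per year (~100,000/day)
disability_free_years = 1_000      # assumed disability-free years gained per person after LEV

# QALYs averted for each single year that LEV arrives earlier
qalys_per_year_closer = deaths_per_year * disability_free_years
print(f"{qalys_per_year_closer:,}")  # 36,500,000,000
```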
Regarding your proposal of prizes: I think prizes for a general-purpose cure for aging should include prizes for intermediate steps, which may look… less than incredible. All of the intermediate steps, even within “paradigm-shift-like” plans like Aubrey de Grey’s, will look similar from the outside: they will delay aging. Unless you are measuring whether tissues are actually rejuvenated, and you have a robust theoretical framework, you won’t recognise what will be necessary in the long term. For cancer the picture looks much better, but the incremental steps, even toward a general-purpose cure, will probably belong to many different groups.
You will probably be pleased to know that Peter Diamandis’ XPrize Foundation recently started a similar project: giving prizes for specific innovations identified as important in aging research and adjacent areas.
Yes, looking back at this, I should have just said that on average, if someone dies each year with a probability of 1/1000, then he will live 999 full years and die in the 1,000th. And then I should have linked him to the “Expectation” section of the Wikipedia page on the negative binomial distribution.
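To spell out that expectation claim: if each year carries an independent death probability of p = 1/1000, the year of death follows a geometric distribution, whose mean is 1/p = 1000. A minimal numerical check, truncating the infinite sum where the tail is negligible:

```python
p = 1 / 1000  # assumed yearly probability of death
q = 1 - p

# E[X] = sum over k >= 1 of k * p * q**(k-1) for a geometric distribution;
# truncating at k = 20,000 leaves a tail on the order of 1e-5
expected_years = sum(k * p * q ** (k - 1) for k in range(1, 20_001))
print(round(expected_years, 2))  # ≈ 1000.0
```

So on average one dies in year 1000, having lived through the 999 years before it.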
Thanks, great info. This post is officially outdated :)
Great post, I really appreciate the solutions you propose. I often fear that English mistakes could harm how my arguments are perceived. I think keeping personal written notes with solutions could help people make the conscious effort of evaluating arguments on the merits of their content. It could even help comprehension of arguments made by native speakers…
I fear that, even knowing all this, if the conscious effort is lacking, the effect will remain the same. I don’t feel very optimistic given what I’ve seen in similar contexts, but I hope I’m wrong. Note that there are many areas in which knowledge doesn’t really help without conscious effort; rationality as a whole may be one of them.
It’s not a matter of fairness. It’s a matter of reducing the probability of not hearing good ideas because of stupid reasons.
I used “Counterintuitive” because people tend to think the person-affecting view generates more cost-effectiveness than the impersonal view (see the comments under my first post), regardless of how the views affect the comparison with other causes. But yes, adopting the person-affecting view seems to make aging research look better in comparison with the other causes you mention, since it negates a lot of their impact. Adopting the impersonal view, instead, makes the comparison favour prevention of x-risks that could wipe out literally all of humanity (otherwise aging research looks far better), and probably some interventions regarding non-human animals, again depending on how much you value animals.
Note that this doesn’t make aging research worthless to evaluate from an EA perspective. Many people and orgs (e.g. Open Philanthropy) donate to more than just the top two causes, and aging research seems to come in second or third place, probably depending on how much you value non-human animals. Mathematically, it makes sense to diversify between top causes in order to reduce risk. Diversifying also makes sense when a specific intervention in a seemingly worse cause area is more cost-effective than the available interventions in a cause area that looks better overall, for example because the most cost-effective interventions in the top cause areas are already funded, or because the worse-looking cause area contains a particularly cost-effective intervention.
I think I upvote mostly like this (I’ll edit this answer if I remember more reasons):
Strong upvote: correct (if applicable) and important.
Upvote: correct (if applicable) and slightly important, or not completely correct but interesting/potentially important (in this case I usually also reply). I also tend to upvote comments under my own posts much more, because I feel the need to somehow thank the person who took the time to write the comment.
I don’t always follow these rules exactly, but mostly. For example, sometimes I upvote to close the gap between two comments I consider equally important under a post.
Relevant people involved in potential top cause areas.
For me there is a strong “what the world could be if I did this, so it would be a huge waste if I didn’t do this” sense that motivates me, although in the past I used to overestimate the potential good effects of my actions. It is probably similar to the need for efficiency you mention, but it also generates an unpleasant but correct sense of urgency, because usually if I don’t do things fast, the effect may not be the same. There’s also wanting to have a good impact on the world, which is more core and generates meaning.
I’m not sure these considerations would change how aging research looks from an EA perspective. They are among the many “rounding-error” side effects, beside the main purpose of buying QALYs and freedom. Moreover, all of these additional considerations, both positive and negative, might be made irrelevant by new disruptive tech and societal/political/organisational change. Examples: cognitive enhancement, AI, research-funding management.
I’m not sure if there’s a definite answer about how much cognitive decline influences this kind of stuff, but I wouldn’t be surprised if “being stuck in old ways” or not being able to understand new developments and innovate are symptoms of neurological old age more than accumulated bias.
There are also other factors that are less related to aging (but which could still benefit from rejuvenation) that play into how superstar researchers hinder the careers of younger scientists (see Gavintaylor’s comment). These problems, though, don’t need people to die in order to be solved. Organisational improvements would be sufficient.
Population capacity gets larger as technology improves, so it’s not obvious we’ll reach maximum capacity in the near future (the next centuries). Regardless, even if we reached it, the impact of aging research wouldn’t change, because the impact comes from bringing LEV closer, not from guaranteeing LEV’s existence. You will find answers in the second post of the framework: https://forum.effectivealtruism.org/posts/uR4mEzMR7fiQzb2c7/aging-research-and-population-ethics
Notice that the post you are commenting under is just the first in a series. I have already published four of them! Here are the second, third, and fourth. There are more to come, and I also plan to do some interviews with organisations. I suggest you read what I’ve written so far and get back if you still have doubts or want more information or a primer on the field. It will probably be useful for you to subscribe to my posts through the “Subscribe to this user’s posts” option on my profile.
The fact that no new hallmark has been discovered in decades is probably telling. But I think it is reasonable to believe there are additional hallmarks that would only become visible over longer-than-human lifespans.
Yes! Which hallmarks to prioritise is an extremely important thing to figure out. The next post is coming out soon, and this topic is a central part of it. In short, I think we should keep an eye on two things when prioritising in this area: whether a given line of research is necessary for achieving LEV, and how neglected it is. Neglectedness seems particularly important because the hardest research is often the most neglected (too long-term for private investment, too risky for public funding). The hardest hallmarks will be cracked later, so they will more or less constitute the last “bastions” before LEV. If we speed up progress on them, we should impact the date of LEV the most. One way to measure neglectedness could be to browse papers by keyword and see which hallmarks have the fewest entries. Another useful preliminary tool could be this roadmap, by lifespan.io. The most neglected/hardest hallmarks are probably the ones currently in the earliest stages of research.
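The keyword-count heuristic could be sketched like this. The hallmark names below are from the standard hallmarks-of-aging list, but the paper counts are made-up placeholders, not real literature-search results:

```python
# Hypothetical paper counts per hallmark keyword
# (placeholder numbers, NOT real query results)
paper_counts = {
    "cellular senescence": 12_000,
    "mitochondrial dysfunction": 9_500,
    "telomere attrition": 7_000,
    "epigenetic alterations": 4_000,
    "loss of proteostasis": 2_500,
    "altered intercellular communication": 900,
}

# Rank hallmarks from most to least neglected (fewest papers first)
by_neglectedness = sorted(paper_counts, key=paper_counts.get)
for hallmark in by_neglectedness:
    print(hallmark, paper_counts[hallmark])
```

In practice the counts would come from a literature database query per keyword; the ranking step stays the same.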
I think in general it would make the most sense to prioritise research that would impact the date of LEV the most, because LEV results in both living healthier and living longer. It would also probably be easier to do, since it’s difficult to know which hallmark/aspect of aging impacts healthspan the most, and they impact each other a lot. Instead, we probably can estimate the relative impact on the date of LEV using neglectedness (more on this in the next post). As a strategy, I suspect that prioritising the short term for a bigger immediate effect would be less cost-effective.
Also note: the therapies that improve age-related diseases the most would also be the ones that extend life the most. Curing aging and curing age-related diseases are the same thing. If aging is not cured, some disease will always remain; otherwise, why would you die?
Thanks for the comments :) I basically agree with everything. The only thing I would add is this:
Getting a life satisfaction curve from 20- to 90-year-olds who don’t have age-related disabilities could be a step in the right direction for understanding how to extrapolate life satisfaction to lifespans that are only possible through LEV. It has to be taken into account, though, that a healthy old person (or a healthy middle-aged person) is still in worse health than a healthy young person. In fact, yesterday it was suggested to me to add to the post the subtler effects of aging that aren’t counted as diseases, such as loss of neuroplasticity and fluid intelligence. Another person reminded me that physical appearance also degrades very fast with age. Maybe it would turn out to be correct to extrapolate the life satisfaction curve you get from healthy old people, but I’m not sure how far. I think it’s at least very probable that doing so would fail for lives longer than a couple of centuries, although maybe we could still attempt a rough estimate while accounting for uncertainty. There are many things that would complicate such an extrapolation. Examples: a possibly different relationship with death and risk, a greater ability to try new things and take financial risks, more time for doing everything, being able to choose different life paths and careers, being able to experience new transformative technologies and human progress, experiencing the death of other people much more rarely and generally never seeing them lose their qualities. These probably count as subtler possible benefits of aging research, although I didn’t list them in the post. There are probably many others.
Charles Babbage designed the Analytical Engine, a mechanical general-purpose (Turing-complete) computer, in 1837. This is remarkable because it came a century before the theory put in place by Turing, which inspired, and is at the heart of, today’s computers. You can find a description of the Analytical Engine in Babbage’s autobiography, “Passages from the Life of a Philosopher”. Ada Lovelace, who collaborated with him, wrote some programs for it, becoming the first programmer in history.
This fact has inspired a lot of steampunk fiction, reasoning along the lines of: “What if the Analytical Engine had actually been built and improved upon at that time? What if other non-general-purpose mechanical calculators, like the Difference Engine, had followed the same development as the circuit-based ones we saw during the twentieth century?”
This is true, but carrying capacity increases as technology improves. This, plus the fact that birth rates are below replacement in the developed world and falling pretty much everywhere, should make us think we will not be in a Malthusian situation when LEV arrives.
It depends how you interpret PA. I don’t think there is a standard view—it could be ‘maximise the aggregate lifetime utility of everyone currently existing’, in which case what you say would be true, or ‘maximise the happiness of everyone currently existing while they continue to do so’, which I think would turn out to be a form of averaging utilitarianism, and on which what you say would be false.
Good points, although I’m not sure who would hold averaging utilitarianism. But yes, in this case prolonging life wouldn’t matter.
Yes, but this was a comment about the desirability of public advocacy of longevity therapies rather than the desirability of longevity therapies themselves. It’s quite plausible that the latter is desirable and the former undesirable—perhaps enough so to outweigh the latter.
I doubt that the damage from public advocacy would outweigh the good. The large-scale distress you mention could happen only if advocacy were really good at convincing people of the possibility of bringing aging under medical control. But then aging would become an issue in everyone’s eyes, and funding would immediately spike, along with policies to accelerate the process. If that happened, the supposed psychological distress would be a rounding error compared even only with the additional DALYs prevented at the end of life. Conversely, if advocacy managed to convince people of the possibility of bringing aging under medical control but didn’t bring additional money and talent into research, then yes, the psychological damage would probably outweigh the positive impact. But is this a real possibility? I don’t think it’s possible to convince a large fraction of the population and at the same time not cause resources to pour into the field. You could then argue that research could be so ineffective that pouring resources into it wouldn’t accelerate anything, but I think that has a very low probability. Note also that, in expectation, even a very small hastening of the field would outweigh the psychological distress.
Your argument was that it’s bigger, subject to (a) its not reducing the birth rate and (b) adding net population in the near future being good in the long run. Both are claims for which I think there’s a reasonable case; neither is a claim that seems to have a 0.75 probability (I would go lower for the second one at least, but YMMV). With a 0.44+ probability that at least one assumption is false, I think it matters a lot.
At worst the PA view and the impersonal view have the same effect, so “it matters a lot” seems exaggerated to me. A separate idea would be to discount the impact because of these considerations, but under expected-value reasoning that still wouldn’t be advisable.
Again this is totally wrong. Technologies don’t just come along and make some predetermined set of changes then leave the world otherwise unchanged—they have hugely divergent effects based on the culture of the time and countless other factors. You might as well argue that if humanity hadn’t developed the atomic bomb until last year, the world would look identical to today’s except that Japan would have two fewer cities (and that in a few years, after they’d been rebuilt, it would look identical again).
I think you are right here, but I still don’t think most of the impact would come from the ripple effects that hastening aging research would have on the far future. We don’t even know whether those effects would be good or bad. In my view they would probably just be cultural, and neutral cost-effectiveness-wise.