I think in general it would make the most sense to prioritise research that would most affect the date of LEV, because LEV results in both living healthier and longer. It would also probably be easier to do, since it’s difficult to know which hallmark or aspect of aging impacts healthspan the most, and they influence each other a lot. Instead, we can probably estimate the relative impact on the date of LEV using neglectedness (more on this in the next post). As a strategy, I suspect that prioritising the short term for a bigger immediate effect would be less cost-effective.
Also note: the therapies that improve age-related diseases the most would also be the ones that extend life the most. Curing aging and curing age-related diseases are the same thing. If aging is not cured, some disease will always remain; otherwise, why would you die?
Thanks for the comments :) I basically agree with everything. The only thing I would add is this:
Getting a life-satisfaction curve from 20- to 90-year-olds who don’t have age-related disabilities could be a step in the right direction for understanding how to extrapolate life satisfaction to lifespans that are only possible through LEV. It has to be kept in mind, though, that a healthy old person (or a healthy middle-aged person) is still in worse health than a healthy young person. In fact, yesterday it was suggested to me to add to the post the subtler effects of aging that aren’t counted as diseases: things like loss of neuroplasticity and fluid intelligence. Another person reminded me that physical appearance also degrades very fast with age.

Maybe it would turn out to be correct to extrapolate the life-satisfaction curve you get from healthy old people, but I’m not sure how far. I think it’s at least very probable that such an extrapolation would fail for lives longer than a couple of centuries, although maybe we could still make a rough estimate while accounting for uncertainty. There are a lot of things that would complicate it. Examples: a possibly different relationship with death and risk, greater freedom to try new things and take financial risks, more time for doing everything, being able to choose different life paths and careers, being able to experience new transformative technologies and human progress, and experiencing the death of other people much more rarely and generally never seeing them lose their qualities. These things probably count as subtler possible benefits of aging research, although I didn’t list them in the post. There are probably many others.
Charles Babbage designed the Analytical Engine, a mechanical general-purpose (Turing-complete) computer, in 1837. This is remarkable because it came a century before the theory put in place by Turing, which inspired, and is at the heart of, today’s computers. You can find a description of the Analytical Engine in Babbage’s autobiography, “Passages from the Life of a Philosopher”. His collaborator Ada Lovelace wrote some programs for it, becoming the first programmer in history.
This fact inspired a lot of steampunk fiction, reasoning along the lines of: “What if the Analytical Engine had actually been built and improved upon at the time? What if other, non-general-purpose mechanical calculators like the Difference Engine had followed the same development as the circuit-based ones we saw during the twentieth century?”
This is true, but carrying capacity increases as technology improves. This, plus the fact that birth rates are below replacement in the developed world and falling pretty much everywhere, should make us think we will not be in a Malthusian situation when LEV arrives.
It depends how you interpret PA. I don’t think there is a standard view—it could be ‘maximise the aggregate lifetime utility of everyone currently existing’, in which case what you say would be true, or ‘maximise the happiness of everyone currently existing while they continue to do so’, which I think would turn out to be a form of averaging utilitarianism, and on which what you say would be false.
Good points, although I’m not sure who would hold averaging utilitarianism. But yes, in this case prolonging life wouldn’t matter.
Yes, but this was a comment about the desirability of public advocacy of longevity therapies rather than the desirability of longevity therapies themselves. It’s quite plausible that the latter is desirable and the former undesirable—perhaps enough so to outweigh the latter.
I doubt that the damage done by public advocacy would outweigh the good. The large-scale distress you mention could only happen if advocacy were really good at convincing people of the possibility of bringing aging under medical control. But then aging would become an issue in everyone’s eyes, and funding would immediately spike, along with policies to accelerate the process. If this happens, the supposed psychological distress would be a rounding error compared even just with the additional DALYs prevented at the end of life.

Otherwise, if advocacy manages to convince people of the possibility of putting aging under medical control but doesn’t bring additional money and talent into research, then yes, the psychological damage would probably outweigh the positive impact. But is this a real possibility? I don’t think it’s possible to convince a large fraction of the population without causing resources to pour into the field. You could then argue that research might be so ineffective that pouring resources into it wouldn’t accelerate anything, but I think this has a very low probability. Note also that, in expectation, even a very small hastening of the field would outweigh the psychological distress.
Your argument was that it’s bigger subject to (a) its not reducing the birth rate and (b) adding net population in the near future being good in the long run. Both are claims for which I think there’s a reasonable case; neither seems to have a .75 probability (I would go lower for at least the second one, but YMMV). With a .44+ probability that at least one assumption is false, I think it matters a lot.
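For what it’s worth, the arithmetic behind the “.44+” figure can be checked directly, assuming the two claims are independent and each holds with probability .75 (the figures from the comment):

```python
# Probability that at least one of two independent assumptions is false,
# assuming each holds with probability 0.75.
p_each = 0.75
p_both_hold = p_each * p_each            # 0.5625
p_at_least_one_false = 1 - p_both_hold   # 0.4375, i.e. the ".44+" above
print(p_at_least_one_false)
```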
At worst the PA view and the impersonal view have the same effect, so “it matters a lot” seems exaggerated to me. A totally separate idea would be to introduce a discount on impact because of these considerations, but under expected value reasoning that still wouldn’t be advisable.
Again this is totally wrong. Technologies don’t just come along and make some predetermined set of changes then leave the world otherwise unchanged—they have hugely divergent effects based on the culture of the time and countless other factors. You might as well argue that if humanity hadn’t developed the atomic bomb until last year, the world would look identical to today’s except that Japan would have two fewer cities (and that in a few years, after they’d been rebuilt, it would look identical again).
I think you are right here, but I still don’t think most of the impact would come from the ripple effects that hastening aging research would have on the far future. We don’t even know whether those effects would be good or bad. In my view they would probably just be cultural, and neutral cost-effectiveness-wise.
The person-affecting (PA) view doesn’t make this a slam-dunk. PAness doesn’t signify that death in itself has negative value, so given your assumption ‘that there isn’t suffering at the end of life and people get replaced immediately’, on the base PA view, increasing lifespans wouldn’t in itself generate value. No doubt there are flavours of PA that would claim death *does* have disvalue, but those would need to be argued for separately.
The PA view doesn’t need to assign disvalue to death to make increasing lifespans valuable. It just needs to assign death a smaller value than being alive. Unless you literally don’t care whether people live or die, or you think that dying is better than living, my argument holds.
Obviously there often *is* profound suffering at the end of life, which IMO is a much stronger argument for longevity research—on both PA and totalising views. Though I would also be very wary of writing articles arguing on those grounds, since most people very sensibly try to come to terms with the process of ageing to reduce its subjective harm to them, and undoing that for the sake of moving LEV forward a few years might cause more psychological harm than it prevented.
If we bring LEV nearer we don’t increase the distress anti-aging therapies will cause people at first. We just bring that distress forward in time.
My impression is that the PA view is held by a fairly small minority of EAs and consequentialist moral philosophers (for advocates of nonconsequentialist moral views, I’m not sure the question would even make sense—and it would make a lot less sense to argue for longevity research based on its consequences), and if so, treating it as having equal evidential weight as totalising views is misleading.
I guess so. I used the same style as the introductory articles to EA, which present the views pretty neutrally, although they recognise the impersonal view as probably superior. This doesn’t matter much, though, since, as I wrote, impact under the impersonal view is actually bigger.
3) ‘Reminder: most of the impact of aging research comes from making the date of LEV come closer and saving the people who wouldn’t otherwise have hit LEV.’
This is almost entirely wrong. Unless we a) wipe ourselves out shortly after hitting it (which would be an odd notion of longevity), or b) reach it within the lifespans of most existing people *and* take a death-averse PA view, the vast majority of LEV’s impact will come from its ripple effect on the far future, and the vast majority of its expected impact will be our best guess as to that.
Financing aging research only hastens it, thereby moving the date of LEV closer. The ripple effect that defeating aging would have on the far future would remain the same: people living 5,000 years from now wouldn’t care whether we hit LEV now or in 2040. So this isn’t even a measure of impact.
EAs tend to give near-term poverty/animal welfare causes a pass on that estimation, perhaps due to some PA intuitions, perhaps because they’re doing good and (almost) immediate work, which if nothing else gives them a good baseline for comparison, perhaps because the immediate measurable value might be as good a proxy as any for far-future expectation in the absence of good alternative ways to think about the latter (and plenty of people would argue that these are all wrong, and hence that we should focus more directly on the far future. But I doubt many of the people who disagree with *them* would claim on reflection that ‘most of the impact of poverty reduction comes from the individuals you’ve pulled out of poverty’).
Longevity research doesn’t really share these properties, though, and certainly doesn’t have them to the same degree, so it’s unlikely to have the same intuitive appeal, in which case it’s hard to argue that it *should*. Figuring out the short-term effects is probably the best first step towards doing this, but we shouldn’t confuse it with the end goal.
If you are curious, Sarah Constantin recently wrote an analysis using the shorter-term effects of aging research as a measure of impact: this one. My next post is also exactly about the shorter-term impact; I think it’ll be published in a couple of weeks. It will cover DALYs averted at the end of life, impact on life satisfaction, the economic and societal benefits, and impact on non-human animals.
Hey, I just published the second post of the framework, which answers your comment pretty well. I even mentioned you inside it. Here it is: https://forum.effectivealtruism.org/posts/uR4mEzMR7fiQzb2c7/aging-research-and-population-ethics
Open Philanthropy, GiveWell and Rethink Priorities probably qualify. To clarify: my phrase didn’t mean “devoted exclusively to finding new potential cause areas”.
In my understanding, “Cause X” is something we almost take for granted today but that people in the future will see as a moral catastrophe (similarly to how we see slavery today, versus how people in the past saw it). So it has a bit more nuance than just being a “new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar”.
I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of “recognition” a neglected moral problem starts from in a world as interconnected as ours. Here is what comes to mind:
Wild animal suffering (probably not recognised as a moral problem by the majority of the population)
Aging (many people probably ascribe it a neutral moral value, maybe because it is rightly regarded as a “natural part of life”. That consideration is correct, but it doesn’t settle its moral value or how many resources we should devote to the problem)
“Resurrection” or, in practice, right now, cryonics (probably neutral value/not even remotely on the radar of the general population, with many people possibly even ascribing it a negative moral value)
Something related to subjective experience? (Aspects of subjective experience that people don’t deem worthy of moral value because “times are still too rough to notice them”, or aspects of subjective experience that we are missing out on but could achieve today with the right interventions.)
Cause areas that I think don’t fit the definition above:
Mental Health, since it is recognised as a moral problem by a large enough fraction of the population (but still probably not large enough?). Although it is still too neglected.
X-risk. Recognised as a moral problem (who wants the apocalypse?) but too neglected, for reasons probably not related to ethics.
But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably also argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I’m not sure whether there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.
I just answered your other comment, but I only saw this one now. Apparently neither notification arrived. Thanks a lot for taking the time to read and answer both.
Some of my replies in the other comment apply here too. I’ll go in order.
Regarding your first paragraph: Yes, I’m preparing a post about potential age discounting that could be applied. I included it among the moral considerations that would correct the impact estimate. But you made a good point, and I may need to modify it in light of it.
Regarding AI and other technology: For the very specific case of AI potentially automating R&D I think the timeline is longer than for LEV achieved through biomedical research (I’m taking the view that arises from the probability distribution given by AI researchers), but, as you said, it’s not the only technology that would make some of the efforts made now less useful.
Regarding your third paragraph: Yes, probably the only non-human animals benefitting from LEV would be pets, although I don’t know how many. I should try to make an estimate.
Regarding comparisons with other cause areas: I think there are some interventions in aging research that could reap massive benefits and that are neglected and somewhat tractable. Copying from the other comment: The foundational research is not very neglected, while there are wide areas of translational research that could use much more funding and that are necessary to reach the final goal. From lifespan.io’s Rejuvenation Roadmap you should get a preliminary idea.
Your example using the SENS approach is correct: areas like stem cell research and cancer research don’t seem to be underfunded. But they are only two pieces of the puzzle; some others are much more neglected. That’s why SENS itself gives higher priority to the most neglected areas, like mitochondrial dysfunction and crosslinks, which should also be more tractable (an interesting fact is that Aubrey de Grey often emphasises neglectedness, tractability and scope in his conferences, but I haven’t heard anyone within EA pointing this out). If stem cell research, cancer and other difficult, highly funded areas were all there was to aging research, it wouldn’t look like a very good candidate EA cause. In fact, not only de Grey but many researchers in the area are pursuing projects they believe are very much funding-constrained (example: Steve Horvath).
About the comparison with x-risk reduction: Yes, I broadly agree that x-risk reduction looks overall more promising as a cause area. However I think that many x-risk focused interventions have a higher level of uncertainty. It also seems that within Effective Altruism little to no effort has been made to evaluate aging research, while, to me, it looks highly competitive with many of the other focuses of EAs (some specific interventions inside aging research should be very recognisably better). So it should be analysed further, especially because we may be missing out on especially important opportunities.
Hey! Thanks for the comment! I really appreciate it. For some reason I’m only seeing it now and by chance. I don’t know if I didn’t get the notification or if I missed it.
I’m not sure if this is the post I was asking feedback for though. This analysis is from nine months ago, and my views on it changed. On Facebook I was probably referring to this other post I made recently: A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity. [EDIT: I just saw you made a comment under that post too, so never mind].
Regarding the content of your comment: I agree with most of it. In fact 3.6 years is probably a big overestimation. However, I still think that, in general, bringing LEV forward could be a big contributor to the cost-effectiveness of aging research. In my newer post I lay out the same arguments you make about improving technology possibly subsuming the effect of today’s research, making it less cost-effective. This factor also influences variable E in the TAME analysis, which I also probably vastly overestimated. For the very specific case of AI potentially automating R&D, I think the timeline is longer than for LEV achieved through biomedical research (I’m taking the view that arises from the probability distribution given by AI researchers), but, as you said, it’s not the only technology that would make some of the efforts made now less useful.
Maybe I’m still less “pessimistic” than you, in the sense that I think an ice-breaking effect could enable more research on neglected facets of aging for which treatments could be devised much more quickly. The foundational research is not very neglected, while there are wide areas of translational research that could use much more funding and that are necessary to reach the final goal. From lifespan.io’s Rejuvenation Roadmap you should get a preliminary idea.
Regarding the expected number of years added by metformin: I think “one year” is a very conservative number given the evidence I’ve presented, and you’ll often hear researchers estimating more.
I want to point out that some posts from January may no longer be browsable by day. This happened to my post, but I don’t know whether other people have had the same problem. You may want to keep this in mind so as not to miss potential candidates.
Thanks for this post! I didn’t realise a description could be important. I added one :)
Hey, this is a great post! I’m really happy to see it, and it was a really nice and unexpected surprise.
I don’t know if you have seen it, but I recently published the first post of what will be a series in which I’m trying to build a framework for evaluating the cost-effectiveness of any given aging research project: this one.
In your model you only account for DALYs prevented when measuring impact, while I would like to account for many more things: all the considerations arising from the concept of Longevity Escape Velocity (e.g. bringing its date closer by one year could save roughly 36,500,000 lives of 1,000 QALYs each, using a conservative estimate), DALYs prevented, the economic and societal benefits of increased healthspan (the longevity dividend), and the value of information.
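A back-of-the-envelope sketch of where a figure like 36,500,000 comes from, assuming roughly 100,000 deaths per day attributable to aging (a commonly cited rough estimate; the exact daily figure is an assumption here, not a number from this comment):

```python
# Rough sketch: lives saved by moving the date of LEV forward by one year,
# assuming ~100,000 aging-related deaths per day (an assumed estimate).
deaths_per_day = 100_000
days_per_year = 365
lives_saved = deaths_per_day * days_per_year   # 36,500,000

# With the conservative 1,000 QALYs per life used in the post:
qalys_per_life = 1_000
total_qalys = lives_saved * qalys_per_life     # 36.5 billion QALYs
print(lives_saved, total_qalys)
```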
I would also like to explore moral considerations that could potentially influence impact, such as whether age discounting should be applied and how population ethics influences the estimates, since at first glance an impersonal view seems to imply that a sharp downward correction is necessary (although upon further analysis it turns out that this is not the case).
Another difference is that I’m trying to build the tools for evaluating specific interventions inside this cause area, and not strictly the cause area as a whole. I’m taking this approach since I believe there are some interventions that would be very ineffective to fund and others that would be extraordinarily cost-effective.
One implication of this is how I will measure tractability and neglectedness. To estimate neglectedness I will probably use the arguments Open Philanthropy made on the topic, with an important addition: it would be informative to list the organisations working on the facets of aging that are least far along in the pipeline from in vitro research to clinical application. We can probably start from lifespan.io’s Rejuvenation Roadmap to build such a list. For evaluating tractability there will probably be some scientific arguments to make.
At the end I will also analyse specific non-profits and interview some people.
In case you want to take a glance at what I’m currently writing, I’ve given you access to my current drafts (which are not polished at all, but may give you an idea of how I’m proceeding): this, this and this.
P.S.: Nine months ago I also made this estimate of the expected cost per life saved of the TAME trial. It’s not great, but it may be of interest. It was made before I began thinking about the framework.
Edit: Are you planning on doing other cost-effectiveness estimates on this topic? Should we join forces?
It’s not necessarily obvious that this is the case.
Premise: In probability theory, the chance of two independent events (events that don’t affect each other, like rolling a six with a die and flipping heads with a coin) happening together is calculated by multiplying their probabilities. In the case of the die and the coin: 1/6 ⋅ 1/2 = 1/12.
To calculate expected future lifetime you need to sum all the additional years you could possibly live, each multiplied by its probability. This is how an expected value is calculated, and if you think about it, it’s basically a weighted average: you want to know the “average” year you will live to, but in taking the average each year can weigh more or less depending on its probability.
It turns out, though, that in this case you can simplify the expected value formula by just adding up the probabilities of being alive in each future year. This works, intuitively, because you are adding up terms of the form 1 more year ⋅ probability of surviving to that year. And what is the probability of surviving to a given year? It is the probability of not dying in any of the previous years! So to find it, you multiply together the probabilities of the independent events of not dying in each year between your current age and the year in question. If the chance of dying in any given year is constant at 1/1000, then the probability of not dying in a year is 1 − 1/1000, and the multiplication looks like (1 − 1/1000)(1 − 1/1000)(1 − 1/1000)…, where the number of factors is the number of years between now and the year you are calculating the probability of reaching. Let’s call this number k. Then the product becomes (1 − 1/1000)^k.
So you are basically adding up probabilities of the form (1 − 1/1000)^k, with k growing to infinity, since the expected value accounts for the probability of surviving to any arbitrary future year.
Why do those probabilities add up exactly to 1/(chance of death)? I would think about it this way: when k is small, the term (1 − 1/1000)^k is large, but still smaller than 1. Each subsequent term of the sum is a little smaller than the last, so you are adding up terms slightly smaller than 1 that keep shrinking. What happens once you have added 1000 terms? Your sum won’t quite reach 1000, but this is compensated by all the subsequent, ever smaller terms you add as the infinite sum is completed.
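The claim above can be checked numerically (a sketch; the exact closed form is the geometric series Σ_{k≥0} (1 − p)^k = 1/p):

```python
# Numerical check: summing the survival probabilities (1 - p)^k over all
# future years recovers 1/p, the expected number of remaining years when
# the yearly chance of death is a constant p.
p = 1 / 1000
# A large but finite number of terms approximates the infinite sum well,
# since the remaining tail shrinks geometrically.
expected_years = sum((1 - p) ** k for k in range(20_000))
print(expected_years)  # very close to 1/p = 1000
```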
I hope I have been clear. I don’t know if there is an easier way of thinking about it, but there probably is. In that case I apologise, since I may be overlooking some really obvious piece of intuition.
Regarding population ethics: I finished writing the first draft of the second post in the series, and it is exactly about this topic. Can I send you the Google Doc so you can comment on it in advance? It’s around 2k words. I know you are a moral philosopher (I remember you writing so in your post about Hippo), so it would be great to have your feedback.
I’m thinking about writing a post just for the proofs, so I can generalise to every technology. There I could try to explain the maths for readers with less background; it should be feasible to include pictures of example graphs.
Thanks for the points made, it’s nice to hear from a biologist :)
I think your first point is a possibility, but an almost purely theoretical one. Most medical technology drops in price over time. The possibility that some technology won’t ever drop in price has a place in the analysis, since people may want to correct their measure of impact if they think a situation of such extreme inequality has a non-negligible probability of happening. I think, though, that it’s very improbable. Aging is such a burden on a state’s economy that it would very soon make sense to distribute therapies for free; I think this is similar to why basic education is provided for free. This may seem very utopian given the current state of healthcare accessibility in the US, but not so much for the rest of the world. I would also be very surprised if such inequality existed and no policies were made against it. I think it’s safe to say that the population would be extremely outraged, and politicians proposing policies to make treatments accessible to everyone would be voted in immediately.
Regarding the second observation: You make a good point. This is probably not clear in the analysis, but I’m also pretty sure that bringing all the hallmarks described in “The Hallmarks of Aging” under medical control will not eradicate aging. For some of the hallmarks there isn’t even complete consensus on whether they are dangerous within a normal human lifespan. This is mostly fine, but it influences the probability. Here is how I reason about it: since LEV is about how fast medical technologies and treatments are invented, if “post-hallmarks” therapies reach the market fast enough, then the people who benefited from the first ones could further increase their lifespan, and so on. At that point I would expect funding for aging research to have skyrocketed, since society will be well aware of what’s happening, and the problems that make my analysis necessary will no longer exist. So I think there’s at least a decent probability that the subsequent therapies will come fast enough. As for whether the next problems will be more difficult: that’s hard to predict, but at least we know we will probably benefit from better technology, so even if they are somewhat harder, we may be able to solve them faster.