Thanks for writing this!
Looking forward to a review of charities in that area.
I invite people interested in this topic to join the Effective Altruism & Life Extension Facebook group.
I suggest you add a summary at the top of the post.
Here’s a summary of LEV leverage points: Bringing the date we solve aging forward by one year could extend the lives of 36,500,000 people by 1000 years (under conservative assumptions^1). But if Longevity Escape Velocity (LEV) is reached before then (and maintained until we solve aging), then bringing LEV forward by one day becomes the crucial leverage point. Note that solving causes of death other than aging near the LEV point would also bring the LEV point forward. Other leverage points would be increasing the probability that LEV is maintained until we solve aging, and increasing the speed at which LEV technology is distributed (note that this doesn’t affect the value of the other leverage points).
1. a) Probably more than 36,500,000, given that in the short term the population will increase, and the fraction of deaths from aging will also increase.
1. b) Probably more than 1000 years in expectation, given that 1000 years might be enough to solve the other causes of death and radically increase that number.
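A minimal sketch of the arithmetic behind these figures, assuming the widely cited rough estimate of ~100,000 deaths from aging per day (an assumption of this summary, not measured data), which is where the 36,500,000-per-year number comes from:

```python
# Back-of-envelope arithmetic behind the leverage points above.
# The 100,000 deaths/day figure is an assumed conservative estimate.

DEATHS_FROM_AGING_PER_DAY = 100_000
DAYS_PER_YEAR = 365

# Bringing the date we solve aging forward by one year affects:
deaths_from_aging_per_year = DEATHS_FROM_AGING_PER_DAY * DAYS_PER_YEAR
print(deaths_from_aging_per_year)  # 36500000

# If LEV is reached (and maintained) first, the relevant unit of
# leverage shrinks to a single day:
lives_per_day_of_lev_advance = DEATHS_FROM_AGING_PER_DAY
print(lives_per_day_of_lev_advance)  # 100000
```

This is why the footnotes call the estimate conservative: a growing population and a growing fraction of deaths from aging would push the per-day figure above 100,000.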
Musing: I wonder if it would be technically more accurate to call it Death Escape Velocity. While solving aging is the crucial point in the model, solving other causes of death near LEV could also expedite when LEV is achieved. And once we solve aging, the LEV model stays relevant: we could (realistically) still increase life expectancy by more than a year per year by reducing the rate of the other causes of death, such as accidents, until we stop adding a year every year and eventually reach a maximum lifespan (or achieve complete immortality).
Edit: the previous version mistakenly said 36,500,000 lives per day instead of per year.
I second the suggestion to add a summary at the top of the post.
The Forum has a feature that it took me a while to notice: on pages that show lists of posts, each post has an estimated reading time. The time for this post, for example, was “20m”. If someone is thinking of investing 20 minutes in a post (and that number is likely conservative if they need to pause, think, go back, etc.), giving them a summary can be really valuable in helping them make that decision.
Thanks Mati_Roy and aarongertler for the suggestion of adding a summary. Now there is one!
Mati_Roy, thank you for the points made! I would like to correct what I think are a couple of misunderstandings, and to elaborate on your idea of using Death Escape Velocity instead of Longevity Escape Velocity:
1) 36,500,000 is the number of people who die of aging in a year, so bringing LEV closer by one year (not by one day) would save this number of lives.
2) If Longevity Escape Velocity doesn’t happen, bringing closer the date at which aging is completely cured could simply do nothing. This is because people living at that time could already have a very low risk of death, which couldn’t go much further down with an additional improvement in treatments for aging. And if Longevity Escape Velocity doesn’t happen, then I would expect the “very slow” scenario or the “dire roadblocks” scenario to be true, and aging would be eradicated very slowly, possibly over centuries.
The points about why my estimate is conservative are summarised well, thanks for doing that :)
Regarding the idea of using “death escape velocity”: I didn’t use it because the technologies that would decrease the risk of death from causes other than aging are substantially different from the ones brought about by aging research. So it would be a completely different cause area! I would also expect them to become more relevant in the future. I don’t think there is much use in thinking about them now, and they wouldn’t make good potential EA interventions to fund, since our ideas will probably be made useless by the much better technology that will exist after aging gets eradicated (which is the first step). “Death escape velocity” could be brought about, for example, by friendly AGI, if that ever comes about. I think this input is valuable though, since it’s an existing related concept that is not talked about much.