Open Philanthropy, GiveWell, and Rethink Priorities probably qualify. To clarify: my phrase didn’t mean “devoted exclusively to finding new potential cause areas”.
In my understanding, “Cause X” is something we almost take for granted today, but that people in the future will see as a moral catastrophe (similar to how we see slavery today, versus how people in the past saw it). So it has a bit more nuance than just being a “new cause area that is competitive with the existing EA cause areas in terms of impact-per-dollar”.
I think there are many candidates that seem to be overlooked by the majority of society. You could also argue that none of these is a real Cause X, since they are still recognised as problems by a large number of people. But this could just be the baseline of “recognition” a neglected moral problem starts from in a very interconnected world like ours. Here is what comes to mind:
Wild animal suffering (probably not recognised as a moral problem by the majority of the population)
Aging (many people probably ascribe it a neutral moral value, maybe because it is rightly regarded as a “natural part of life”. A correct observation, but one that doesn’t determine its moral value or how many resources we should devote to the problem)
“Resurrection” or, in practice, right now, cryonics. (Probably neutral value/not even remotely on the radar of the general population, with many people possibly even ascribing it a negative moral value)
Something related to subjective experience? (aspects of subjective experience that people don’t deem worthy of moral value because “times are still too rough to notice them”, or aspects of subjective experience that we are missing out on but could achieve today with the right interventions).
Cause areas that I think don’t fit the definition above:
Mental Health, since it is recognised as a moral problem by a large enough fraction of the population (though perhaps still not large enough?), although it is still too neglected.
X-risk. Recognised as a moral problem (who wants the apocalypse?) but too neglected for reasons probably not related to ethics.
But who is working on finding Cause X? I believe you could argue that every organisation devoted to finding new potential cause areas is. You could probably argue that moral philosophers, or even just thoughtful people, have a chance of recognising it. I’m not sure if there is a project or organisation devoted specifically to this task, but judging from the other answers here, probably not.
I just answered your other comment, but I saw this one only now. Apparently both notifications didn’t arrive. Thanks a lot for taking the time to read and answer both.
Some of my replies in the other comment apply here too. I’ll go in order.
Regarding your first paragraph: Yes, I’m preparing a post about potential age discounting that could be applied. I included it among the moral considerations that would correct impact. But you made a good point, and I may need to modify it in light of it.
Regarding AI and other technology: For the very specific case of AI potentially automating R&D I think the timeline is longer than for LEV achieved through biomedical research (I’m taking the view that arises from the probability distribution given by AI researchers), but, as you said, it’s not the only technology that would make some of the efforts made now less useful.
Regarding your third paragraph: Yes, probably the only non-human animals benefitting from LEV would be pets, although I don’t know how many. I should try to make an estimate.
Regarding comparisons with other cause areas: I think there are some interventions in aging research that could reap massive benefits and that are neglected and somewhat tractable. Copying from the other comment: the foundational research is not very neglected, while there are wide areas of translational research that could use much more funding and that are necessary to reach the final goal. From lifespan.io’s Rejuvenation Roadmap you should get a preliminary idea.
Your example using the SENS approach is correct: areas like stem cell research and cancer research don’t seem to be underfunded. But they are only two pieces of the puzzle; some others are much more neglected. That’s why SENS itself gives higher priority to the most neglected areas, like mitochondrial dysfunction and crosslinks, which should also be more tractable (an interesting fact is that Aubrey de Grey often emphasises neglectedness, tractability and scope in his conferences, but I haven’t heard anyone within EA point this out). If stem cell research, cancer and other difficult, highly funded areas were all there is to aging research, it wouldn’t look like a very good candidate EA cause. In fact, not only de Grey but many researchers in the area are pursuing projects they believe are very much funding-constrained (example: Steve Horvath).
About the comparison with x-risk reduction: Yes, I broadly agree that x-risk reduction looks more promising overall as a cause area. However, I think many x-risk-focused interventions have a higher level of uncertainty. It also seems that within Effective Altruism little to no effort has been made to evaluate aging research, while, to me, it looks highly competitive with many of the other focuses of EAs (some specific interventions within aging research should be very recognisably better). So it should be analysed further, especially because we may be missing out on especially important opportunities.
Hey! Thanks for the comment! I really appreciate it. For some reason I’m only seeing it now and by chance. I don’t know if I didn’t get the notification or if I missed it.
I’m not sure this is the post I was asking feedback for, though. This analysis is from nine months ago, and my views on it have changed. On Facebook I was probably referring to this other post I made recently: A general framework for evaluating aging research. Part 1: reasoning with Longevity Escape Velocity. [EDIT: I just saw you made a comment under that post too, so never mind].
Regarding the content of your comment: I agree with most of it. In fact, 3.6 years is probably a big overestimate. However, I still think, in general, that bringing LEV forward could be a big contributor to the cost-effectiveness of aging research. In my newer post I lay out the same arguments you make about improving technology that may subsume the effect of today’s research, making it less cost-effective. This factor also influences variable E in the TAME analysis, which I also probably vastly overestimated. For the very specific case of AI potentially automating R&D, I think the timeline is longer than for LEV achieved through biomedical research (I’m taking the view that arises from the probability distribution given by AI researchers), but, as you said, it’s not the only technology that would make some of the efforts made now less useful.
Maybe I’m still less “pessimistic” than you, in the sense that I think an ice-breaking effect could enable more research on neglected facets of aging for which treatments could be devised much more quickly. The foundational research is not very neglected, while there are wide areas of translational research that could use much more funding and that are necessary to reach the final goal. From lifespan.io’s Rejuvenation Roadmap you should get a preliminary idea.
Regarding the expected number of years added by metformin: I think “one year” is a very conservative number given the evidence I’ve presented, and you’ll often hear researchers estimating more.
I want to note that some posts from January may not be browsable by day anymore. This happened to my post, but I don’t know if other people had this same problem. You may want to keep this in mind in order not to miss potential candidates.
Thanks for this post! I didn’t realise a description could be important. I added one :)
Hey, this is a great post! I’m really happy to see it, and it was a really nice and unexpected surprise.
I don’t know if you have seen it, but I recently published the first post of what will be a series in which I’m trying to build a framework for evaluating the cost-effectiveness of any given aging research project: this one.
In your model you only account for DALYs prevented when measuring impact, while I would like to account for many more things: all the considerations arising from the concept of Longevity Escape Velocity (e.g. bringing its date closer by one year could save roughly 36,500,000 lives of 1,000 QALYs each, using a conservative estimate), DALYs prevented, the economic and societal benefits of increased healthspan (the longevity dividend), and the value of information.
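As a quick back-of-the-envelope sketch of where that 36,500,000 figure comes from (my own check, assuming the commonly cited round number of roughly 100,000 aging-related deaths per day, which is not stated in this comment):

```python
# Hypothetical back-of-the-envelope check of the "36,500,000 lives" figure.
# Assumed input (not from this comment): roughly 100,000 people die of
# aging-related causes every day.

deaths_per_day = 100_000               # assumed daily deaths from aging
lives_saved = deaths_per_day * 365     # lives saved by moving LEV one year closer
qalys_per_life = 1_000                 # conservative per-life estimate used above
total_qalys = lives_saved * qalys_per_life

print(lives_saved)   # 36500000
print(total_qalys)   # 36500000000
```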
I would also like to explore moral considerations that could potentially influence impact, such as whether age discounting should be applied and how population ethics influences the estimates, since at first glance an impersonal view seems to imply that a sharp downward correction is necessary (although upon further analysis it turns out that this is not the case).
Another difference is that I’m trying to build the tools for evaluating specific interventions inside this cause area, and not strictly the cause area as a whole. I’m taking this approach since I believe there are some interventions that would be very ineffective to fund and others that would be extraordinarily cost-effective.
One implication of this is how I will measure tractability and neglectedness. To estimate neglectedness I will probably use the arguments Open Philanthropy made on the topic, but with an important addition: it would be informative to list the organisations working on the facets of aging that are the least far along in the pipeline that goes from in vitro research to clinical application. We can probably start from lifespan.io’s Rejuvenation Roadmap to build a list of this kind. For evaluating tractability there will probably be some scientific arguments to make.
At the end I will also analyse specific non-profits and interview some people.
In case you want to take a glance at what I’m currently writing, I gave you access to my current drafts (which are not polished at all, but may give you an idea of how I’m proceeding): this, this and this.
P.s.: Nine months ago I also made this estimate of the expected cost per life saved of the TAME trial. It’s not great, but it may be of interest. It was made before I began thinking about the framework.
Edit: Are you planning on doing other cost-effectiveness estimates on this topic? Should we join forces?
It’s not necessarily obvious that this is the case.
Premise: In probability theory, the chance of two independent events happening together (events that don’t affect each other, like a six coming up when you roll a die and heads coming up when you toss a coin) is calculated by multiplying the probabilities of the two events. In the case of the die and the coin: 1/6 ⋅ 1/2 = 1/12.
In the case of calculating expected future lifetime, you need to sum all the additional numbers of years you could possibly live, each multiplied by its probability. This is how an expected value is calculated, and if you think about it, it’s basically a weighted average: you want to know the “average” year you will live to, but in taking the average each year can weigh more or less depending on its probability.
It turns out, though, that in this case you can simplify the expected value formula by just adding up the probabilities of being alive in each future year. Intuitively, this works because you are adding up expected values of the form 1 more year ⋅ probability of surviving to that year. But what is the probability of surviving to a given year? It is the probability of not dying in any of the previous years! So to find it you need to multiply together the probabilities of the independent events of not dying in each year between your current age and the year you are measuring the probability of reaching. If the chance of dying in any given year is constant at 1/1000, then the probability of not dying in a year is 1 − 1/1000, and the multiplication looks like this: (1 − 1/1000)(1 − 1/1000)(1 − 1/1000)…, where the number of factors is the number of years between now and the year you are calculating the probability of reaching. Let’s call this number k. Then the multiplication becomes (1 − 1/1000)^k.
So you are basically adding up probabilities of the form (1 − 1/1000)^k, with k growing to infinity, since you want to account for the probability of surviving to any arbitrary future year when calculating the expected value.
Why do those probabilities add up exactly to 1/(chance of death)? I would think about it this way: when k is small, the term (1 − 1/1000)^k is large, but still smaller than 1. Each subsequent term of the sum is a little smaller than the last. So you are adding up terms that start just below 1 and keep shrinking. What happens once you have added 1000 terms? Your sum falls somewhat short of 1000. But this is compensated by all the subsequent super-small terms you add until the infinite sum is complete.
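For readers who prefer numbers, here is a minimal numerical sketch (my own, not from the original exchange) of the argument above: summing the survival probabilities (1 − 1/1000)^k approaches 1/(chance of death) = 1000.

```python
# Numerical sketch: summing the survival probabilities (1 - p)**k over
# future years k gives the expected number of remaining years, and the
# full series converges to 1/p.

p = 1 / 1000  # assumed constant yearly chance of death

def expected_remaining_years(n_terms):
    """Partial sum of (1 - p)**k for k = 0 .. n_terms - 1."""
    return sum((1 - p) ** k for k in range(n_terms))

# After 1000 terms the sum falls short of 1000 (about 632)...
print(expected_remaining_years(1_000))
# ...but the remaining, ever-smaller terms close the gap: the series sums to 1/p.
print(expected_remaining_years(100_000))  # very close to 1000
```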
I hope I have been clear. I don’t know if there is an easier way of thinking about it, but there probably is; in that case I apologise, since I may be overlooking some really obvious piece of intuition.
Regarding population ethics: I finished writing the first draft of the second post in the series, and it is exactly about this topic. Can I send you the Google Doc so you can comment on it in advance? It’s around 2k words. I know you are a moral philosopher (I remember you writing so in your post about Hippo), so it would be great to have your feedback.
I’m thinking about writing a post just for the proofs, so I can generalise to every technology. I could try to explain the maths there for those with less background. It should be feasible to make pictures of example graphs.
Thanks for the points made, it’s nice to hear from a biologist :)
I think your first point is a possibility, but an almost purely theoretical one. Most medical technology drops in price over time. The possibility that some technology won’t ever drop in price has a place in the analysis, as people may want to correct their measure of impact if they think a situation of such extreme inequality has a non-negligible probability of happening. I think, though, that it’s very improbable. Aging is such a burden on a state’s economy that it would very soon make sense to distribute therapies for free. I think this is similar to why basic education is provided for free. This may seem very utopian given the current state of healthcare accessibility in the US, but not so much for the rest of the world. I would also be very surprised if such inequality existed and policies against it weren’t made. I think it’s safe to say that the population would be extremely outraged about it, and politicians proposing policies to make treatments accessible for everyone would be voted in immediately.
Regarding the second observation: you make a good point. In the analysis this is probably not clear, but I’m also pretty sure that putting all the hallmarks described in “The Hallmarks of Aging” under medical control will not eradicate aging. For some of the hallmarks there is not even complete consensus on whether they are dangerous within a normal human lifespan. This is mostly fine, but it influences the probability. Here is how I reason about this: since LEV is about how fast medical technologies and treatments are invented, if “post-hallmarks” therapies get on the market fast enough, then the people who benefitted from the first ones could further increase their lifespan, and so on. At that point I would expect funding for aging research to have skyrocketed, since society will be well aware of what’s happening, and the problems making my analysis necessary will not exist. So I think there’s at least a decent probability that the subsequent therapies will come fast enough. As for whether the next problems will be more difficult or not: this is hard to predict, but at least we know that we will probably benefit from better technology, so even if they are somewhat more difficult we could be able to solve them faster.
Yes, if the chance of death each year is constant, it turns out that remaining life expectancy is around 1/(chance of death). In fact, in a previous draft of the post I just used this fact and called it a day. I had to use more formalism, though, because the chance of death is not constant: after LEV hits there will be a period in which it falls, and that needs to be taken into account in order to find a true lower bound for life expectancy. The question that is still open is: what is the minimum initial decrease in risk of death that ensures LEV?
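The constant-hazard case above can also be checked by simulation; a quick Monte Carlo sketch (my own, with assumed numbers):

```python
import random

# Monte Carlo sketch: with a constant yearly chance of death p, the
# average remaining lifetime should come out near 1/p years.

def years_survived(p, rng):
    """Count full years lived before death, with constant yearly hazard p."""
    years = 0
    while rng.random() > p:  # survive this year with probability 1 - p
        years += 1
    return years

rng = random.Random(0)
p = 1 / 1000       # assumed constant yearly chance of death
samples = 5_000
mean_years = sum(years_survived(p, rng) for _ in range(samples)) / samples
print(mean_years)  # close to 1/p = 1000
```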
Regarding the point about population ethics: yes, impact depends on population ethics, in the sense that it is certainly as large as it can be under person-affecting deprivationism. Even on totalism, though (which seems a more reasonable view of ethics to me), I don’t expect the considerations made to be invalidated. This is because, for example, a Malthusian outcome in which all disposable resources are always employed is not necessarily the default outcome, and also not necessarily the most desirable one if well-being is taken into consideration. It’s also not clear whether, under non-Malthusian conditions, old people would take resources that would also be useful for young people: there could be vast amounts of unused resources. Then added years of life expectancy could come “for free”, without negating the new younger people who would be born anyway. I think you could argue both ways, so the impact evaluation needs a downward correction but is not invalidated. Another important thing to consider on totalism is moral weights: in general I don’t think it would be ethically better to have generations and generations of people with a five-year lifespan, at least if our ethics accounts not only for how much time a person lives but for how valuable that time is. The same argument could apply to much longer lifespans. Maybe a 1,000-year lifespan is much preferable to a 100-year one. Or maybe not, and time discounting is needed. Again, I think you could argue both ways, because the answer largely depends on information we currently don’t have: what a 1,000-year-old mind looks like and how it differs from that of a current adult.
Thanks Mati_Roy and aarongertler for the suggestion of adding a summary. Now there is one!
Mati_Roy, thank you for the points made! I would like to correct what I think are a couple of misunderstandings, and to elaborate on your idea of using Death Escape Velocity instead of Longevity Escape Velocity:
1) 36,500,000 is the number of people dying of aging in a year, so bringing LEV closer by one year (not by one day) would save this number of lives.
2) If Longevity Escape Velocity doesn’t happen, bringing closer the date on which aging is cured completely could simply do nothing. This is because people living at that time could already have a really low risk of death, one that can’t go much further down with an additional improvement in treatments for aging. If Longevity Escape Velocity doesn’t happen, then I would expect the “very slow” or the “dire roadblocks” scenario to be true, and aging would be eradicated really slowly, possibly over centuries.
The points about why my estimate is conservative are summarised well, thanks for doing that :)
Regarding the idea of using “death escape velocity”: I didn’t use it because technologies that would decrease the risk of death from causes other than aging are substantially different from the ones brought about by aging research. So it would be another cause area entirely! I would also expect them to become more relevant in the future. I think there is not much use in thinking about them now, and they wouldn’t make good candidates for EA interventions to fund, since our ideas will probably be made useless by the much better technology that may exist after aging gets eradicated (which is the first step). “Death escape velocity” could be brought about, for example, by friendly AGI, if that ever comes about. I think this input is valuable though, since it’s an existing related concept that is not talked about much.
No, I mean that you can upvote your own posts with the same account you used to post them. You can even upvote your own comments with the same account you used to comment.
Edit: it seems that now there is a pre-cast vote when you post something, but you can still turn it into a strong vote.
It seems that it is now possible to upvote my own posts anonymously. The eternal question naturally arises: should I? In theory I think my post is useful, otherwise I wouldn’t have posted it, but at the same time upvoting it feels like giving a high-five to myself. On a more serious note: is this a bug or a feature?
Thank you, I applied your suggestion by modifying the text. I just noticed that Guesstimate gives you the standard deviation. I guess I still need to familiarise myself with the tool.
Here is how I would reason about moral weights in this case:
In this case the definition of a “life saved” is pretty different from what it normally means. Normally a life saved means 30 to 80 DALYs averted, depending on whether the intervention is on adults or children. In this case we are talking about potentially thousands of DALYs averted, so a life saved should count for more. On the other hand, there’s also the consideration that when saving, for example, children who would have died of malaria, you are also giving them a chance of reaching LEV. It’s not a full chance, as in the present evaluation, but something probably ranging from 30% to 70%.
Additional consideration: some people may consider children more important to save than adults. Introducing age weighting and time discounting could seem reasonable in this case, since even if you save 5,000 DALYs you are only saving one person, so you might want to discount DALYs saved later in life. On the other hand, there are reasons to disagree with this approach: saving an old person and guaranteeing that he/she reaches LEV also means “saving a library”. A vast amount of knowledge and experience, especially future experience, would otherwise have been completely destroyed. In fact, I am not sure I would apply time discounting myself, for this reason.
Regarding Bayesian discounting:
I just read how GiveWell would go about this (https://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/). To account for it I would need a prior distribution (or more than one?). I also have difficulty making the calculation, since Guesstimate doesn’t let me calculate the variance of the random variables. I will try with other means… maybe with smaller data sets and proceeding by hand or using online calculators.
I would also like to introduce probability distributions into the whole analysis, turn some arguments made in the explanations of certain variables into variables in their own right, and add some more information (for example, the safety profile and history of metformin, and the value of information of the trial) based on feedback I’m receiving. This would mean rewriting many sections, though, and that will require time.
For now I have put an “Edit” at the beginning to warn readers not to take the numbers reached too seriously, while inviting them to delve into some more broadly applicable ideas I presented in the analysis that could be useful for evaluating many interventions in the cause area of aging.
Thank you for the feedback!
I’m still learning, and comments really help me to be more accurate and steepen my learning curve. I set up a Guesstimate model (https://www.getguesstimate.com/models/10848). I didn’t know about this tool; it is really helpful!
Tomorrow I will improve the Guesstimate model and get back to you with another comment regarding the Bayesian discounting you proposed and the moral weights. I might also make other changes to the evaluation beyond the ones you suggested, especially considering that Guesstimate lets me toy with probability distributions.
Yes, maybe I exaggerated saying “almost always”, or at least I was too vague. If you don’t have any specific interventions in mind to evaluate, then a good way to go is to do superficial high-level analyses first and then proceed with lower-level ones. Sometimes the opposite can happen, though, when a particularly promising intervention is found without first investigating its cause area.
I want to add something: it has probably been discussed before, but it occurs to me that when thinking about prioritisation in general, it’s almost always better to think at the lowest level possible. That’s because impact per dollar can only be evaluated for specific interventions, and because causes that at first don’t appear particularly cost-effective can hide particular interventions that are. And those particular interventions could in principle be even more cost-effective than interventions in causes that do appear cost-effective overall. I think high-level cause prioritisation is mostly good for gaining a first, superficial understanding of the promise of a particular class of altruistic interventions.