This may fall under “general medical stuff”, but I’ve always been surprised by how little EA seems to care about aging and human longevity, especially given how fond this community is of measuring “quality-adjusted life years”.
Progress here could help solve depopulation problems, among the other obvious benefits.
I think there’s a decent “extending lifespan would slow down generational replacement, slowing down moral progress” argument, which means that extending lifespan is lower EV than lots of other stuff.
If I understand what you’re saying correctly, this is another reason I don’t identify as EA.
You’re basically saying people dying is advantageous because their influence is replaced by people you deem to have superior virtues?
It’s not obvious to me that “replacement” generations have superior values to those they replace merely on account of being younger/newer, etc.
But even accepting that’s the case, how is discounting someone’s life because they have the wrong opinions not morally demented?
I’m saying that’s a benefit of death and that it reduces the EV of extending lifespan, not that it makes death good overall or means that extending lifespan is net negative.
Even if lifespan extension is good, it shouldn’t be a major EA cause area unless it has a very high EV.
Agree that the importance of generational replacement for moral progress is not super clear, but I expect that the effect of generational replacement is large enough for maximum lifespan extension to not be a good focus for EA.
Also worth adding that there is strong private sector incentive to develop anti-aging interventions, which also makes this a less promising cause area for EA.
Also agree that “value lives equally” is a good principle, but when allocating limited resources to high impact interventions, I think it makes sense to account for all known factors, including the effects of the moral views of the beneficiaries of the interventions, even if that causes us to value lives slightly unequally.
Also, I don’t think my views are generally representative of EA, so I would advise against making judgements about EA based on my views alone.
I’m loath to use this, but let’s use QALYs and assume, as I believe, that they can never be less than 0 (i.e. that it is never better to die than to live).
There is nothing worse than death. There are no benefits unless that death unlocks life.
I don’t think the (likely nonexistent) positive effects of “generational replacement” will mean literally fewer deaths, and certainly not on a scale that justifies discounting the deaths of entire generations of individuals.
I don’t think “personal beliefs” should be included in an “all known factors” analysis of how we invest our resources. Should I value Muslim lives less because they may disagree with me on gay rights? Or capital punishment? Why not, in your framework?
I also don’t think there’s a “but” after “all lives are equal”. That can be true AND we have to make judgment calls about how we invest our resources. My external action is not a reflection of your intrinsic worth as a human but merely my actions given constraints. Women and children may be first on the lifeboat, but that does not mean they are intrinsically worth more morally than men. I think it’s a subtle but extremely important distinction, lest we get to the kind of reasoning that permits explicitly morally elevating some subgroups over others.
I do agree that there is private sector incentive for anti-aging, but I think that’s true of a lot of EA initiatives. I’m personally unsure that diverting funds away from Really Important Stuff just because RIS happens to be profitable is wise. I could perhaps make the case that it’s even MORE important to invest there, if you’re inclined to be skeptical of the profit motive (though I’m not, so I’m not so inclined).
FWIW, my view is that there are states worse than being dead, such as extreme suffering.
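To put the disagreement in standard QALY terms (a rough sketch, not something either of us has spelled out): write total quality-adjusted life years as a sum of per-year quality weights,

\[ \mathrm{QALY} \;=\; \sum_{t=1}^{T} q_t . \]

On your assumption every weight satisfies \(0 \le q_t \le 1\), so an extra year of life can never lower the total; on mine, \(q_t\) can be negative for states like extreme suffering, so an added year can make the total smaller.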
I don’t mean that we should place less intrinsic worth on people’s lives because of their views, but I think it is okay to make decisions which do effectively violate the principle of valuing people equally; your “women and children first” lifeboat case is a good example of this. (I also agree with you that there’s a slippery slope here and that it’s important to distinguish between the two.)
I think “don’t donate to solving problems where there is strong private sector incentive to solve them” is a good heuristic for using charity money as effectively as possible, because there is a very large private sector trying to maximise profit and a very small EA movement trying to maximise impact. Agree that EA doesn’t follow this heuristic very consistently, e.g. I think we should donate less to alternative protein development since there’s strong private sector incentive there.