I’m loath to use this, but let’s use QALYs and assume, as I believe, they can never be less than 0 (i.e. that it can never be better to die than to live).
There is nothing worse than death. There are no benefits unless that death unlocks life.
I don’t think the (likely nonexistent) positive effects of “generation replacement” will mean literally fewer deaths, and certainly not on a scale to justify discounting the deaths of entire generations of individuals.
I don’t think “personal beliefs” should be included in an “all known factors” analysis of how we invest our resources. Should I value Muslim lives less because they may disagree with me on gay rights? Or capital punishment? Why not, in your framework?
I also don’t think there’s a “but” after “all lives are equal”. That can be true AND we have to make judgment calls about how we invest our resources. My external action is not a reflection of your intrinsic worth as a human but merely my actions given constraints. Women and children may be first on the lifeboat, but that does not mean they are intrinsically worth more morally than men. I think it’s a subtle but extremely important distinction, lest we get to the kind of reasoning that permits explicitly morally elevating some subgroups over others.
I do agree that there is private sector incentive for anti-aging, but I think that’s true of a lot of EA initiatives. I’m personally unsure that diverting funds away from Really Important Stuff is wise just because RIS happens to be profitable. I could perhaps make the case it’s even MORE important to invest there, if you’re inclined to be skeptical of the profit motive (though I’m not, so I’m not so inclined).
FWIW, my view is that there are states worse than being dead, such as extreme suffering.
I don’t mean that we should place less intrinsic worth on people’s lives because of their views, but I think it is okay to make decisions which do effectively violate the principle of valuing people equally—your women and children on lifeboats first is a good example of this. (Also agree with you that there’s a slippery slope here and important to distinguish between the two)
I think “don’t donate to solving problems where there is strong private sector incentive to solve them” is a good heuristic for using charity money as effectively as possible, because there is a very large private sector trying to maximise profit and a very small EA movement trying to maximise impact. Agree that EA doesn’t follow this heuristic very consistently; e.g. I think we should donate less to alternative protein development, since there’s strong private sector incentive there.