Hi Jeremy, as far as I can tell, nearly all of the QALYs depend on the idea that it's better to extend someone's life than to replace them with a new person, because by the time revival is possible, we will likely be able to create new people at will. (This assumes society does not decide to stop creating new people before reaching Malthusian limits.)
Basically, we get rapidly into population ethics if you want to debate whether lives are fungible. As Ariel points out elsewhere in the comments—I was not aware of this connection, but it seems fruitful—“Deciding whether lives are fungible is a key part of the debate between ‘person-affecting’ and ‘total’ utilitarians, and as of-yet unsettled as I see it in the EA community.”
To me, the idea that humans are fungible, and that it doesn't matter if someone dies because we can just create a new person, goes so strongly against my altruistic intuitions that the whole notion is difficult to think about. There is a reason similar reasoning leads to the repugnant conclusion.
This is part of why I said “I think the field may be among the most cost-effective ways to convert money into long-term QALYs, given certain beliefs and values”; the idea that humans are not fungible is one of those values. I’m not sure how to calculate the QALYs without assuming that value. I don’t think it’s possible to quantify the “sadness”. Do you have any ideas?
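To make that dependence concrete, here is a minimal sketch in Python. Every number in it is an invented placeholder (the probability, years, and quality weight are assumptions for illustration, not estimates from anywhere):

```python
# Toy illustration only: every number here is an invented placeholder.
p_revival = 0.05     # assumed probability that preservation and revival work
years = 1000         # life-years the revived (or new) person would live
quality = 0.9        # assumed average quality weight per life-year

slot_years = years * quality  # QALYs produced by one "person slot" either way

# Total view, lives fungible: at Malthusian limits the slot is filled by a
# new person whether or not revival works, so revival adds roughly nothing.
gain_if_fungible = slot_years - slot_years          # = 0.0

# Person-affecting view: the replacement's welfare does not offset the
# preserved person's loss, so the full expected gain counts.
gain_if_not_fungible = p_revival * years * quality  # = 45.0

print(gain_if_fungible, gain_if_not_fungible)
```

Under the fungibility assumption the whole estimate collapses to roughly zero, which is why I don't see how to calculate the QALYs without first taking a side.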
Thank you for the thoughtful reply. I jotted the original comment out on my phone, and I'm realizing it came across as more argumentative than I intended. I apologize for that.
I have similar intuitions that creating a new person doesn't make up for the badness of someone dying, but if it is better, I would like to have an idea of how much better, and why.
Assuming we could create new people for some cost, and that those new people have value, it would be important to be able to compare that with the cost and value of reviving someone, so that limited resources are spent as efficiently as possible.
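As a sketch of what that comparison might look like, assuming purely made-up costs and probabilities (none of these figures come from anywhere):

```python
# Hypothetical comparison; costs, probabilities, and years are placeholders.
def cost_per_qaly(cost, probability, years, quality=1.0):
    """Expected cost per quality-adjusted life-year of one intervention."""
    return cost / (probability * years * quality)

# Made-up inputs purely to show the shape of the comparison.
revival = cost_per_qaly(cost=200_000, probability=0.05, years=1000)
creation = cost_per_qaly(cost=50_000, probability=1.0, years=1000)

print(f"revival:  ${revival:,.0f} per QALY")   # $4,000 per QALY
print(f"creation: ${creation:,.0f} per QALY")  # $50 per QALY
```

On these invented inputs creation dominates by orders of magnitude, but the arithmetic is mechanical; the real dispute is whether the new person's QALYs count as a substitute at all.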
Focusing on the subject of the intervention, the value of 1000 years lived to a new person would be the same as the value of 1000 years lived to the revived person, no?
The only difference would seem to be the value to anyone else—other people who care about them.
I can’t say precisely how you would quantify that, but additional relevant factors might be:
how long it might take the technology to develop, and, by that point, how many preserved people would have anyone who cared about them remaining
the probability of revival technology working
I’m sure there’s more I haven’t thought of.
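As a rough illustration of how those factors might combine, here is a sketch in the same spirit, again with entirely invented numbers (the half-life model for surviving carers is my own assumption):

```python
# Rough expected-value sketch combining the factors above.
# Every input is an invented placeholder, not an estimate.
p_tech_works = 0.2          # probability revival technology ever works
years_until_revival = 150   # assumed development time
carer_halflife = 40.0       # years until half of a person's carers are gone

# If only the value to other people distinguishes revival from creating a
# new person (as argued above), the deciding term is small and shrinks
# with development time, modeled here as exponential decay of carers.
value_to_others = 50.0      # assumed QALY-equivalent value to surviving carers
fraction_with_carers = 0.5 ** (years_until_revival / carer_halflife)

expected_extra_value = p_tech_works * fraction_with_carers * value_to_others
print(f"{fraction_with_carers:.3f}")   # ~0.074
print(f"{expected_extra_value:.2f}")   # ~0.74 QALY-equivalents
```

The point is only that each factor enters multiplicatively, so any one of them being small drags the whole estimate down.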