(EDIT: The section 3.2. S-risk Interventions? looks somewhat relevant, but I think the probability of people living very long voluntarily, as discussed here, isn't as small as the probability of people alive today being subjected to extended torture, except for those already being tortured.)
I wonder if the possibility of people living extremely long lives, e.g. thousands of years via anti-aging tech or mind uploading, would change the conclusions here by dramatically increasing the ex ante person-affecting stakes, assuming each person's welfare aggregates over time. It's possible that AMF beneficiaries will end up benefiting from this tech, and saving them increases their chances of this happening, so the number of person-affecting life-years saved in expectation by AMF could be much larger (see the toy calculation after the references below). However, it's not obvious this beats x-risk work, especially aligning AI, which could help with R&D. And instead of either, there's direct work on longevity or mind uploading, or even accelerating AI (which would increase x-risk) to use AI for R&D to save more people alive now from death by aging.
See also:
Gustafsson, J. E., & Kosonen, P. (20??). Prudential Longtermism.
Shulman, C. (2019). Person-affecting views may be dominated by possibilities of large future populations of necessary people.
Barnett, M. (2023). The possibility of an indefinite AI pause, section "The opportunity cost of delayed technological progress".
Jones, C. I. (2023). The A.I. Dilemma: Growth versus Existential Risk. (talk, slides)
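To make the expected-value point above concrete, here's a minimal sketch; the probability, baseline, and extended-lifespan figures are all made-up assumptions for illustration, not estimates from the post or this comment:

```python
# Toy expected-value calculation for person-affecting life-years saved per
# AMF life saved. All numbers are illustrative assumptions, not estimates.

p_longevity = 0.01        # assumed chance a beneficiary reaches radical life extension
baseline_years = 50       # rough years of valuable life saved per life saved, as in the thread
extended_years = 10_000   # assumed lifespan if anti-aging or mind uploading succeeds

expected_years = (1 - p_longevity) * baseline_years + p_longevity * extended_years
print(expected_years)     # 149.5: even a 1% chance of a 10,000-year life roughly triples the stakes
```

Even a small probability of a very long life can dominate the expectation, which is why the ex ante stakes could grow so dramatically.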
I think most people discount their future welfare substantially, though (perhaps other than for meeting some important life goals, like getting married and raising children), so living so much longer may not be that valuable according to their current preferences (see the toy illustration below). To dramatically increase the ex ante stakes, one of the following would need to hold:
1. We don't use their own preferences, and instead say their stakes are higher than they would recognize them to be. This may seem paternalistic and will fail to respect their current preferences in other ways.
2. The vast majority of the benefit comes from the (possibly small and/or atypical) subset of people who don't discount their future welfare much. This gets into objections on the basis of utility monsters, inequity, and elitism (maybe only the relatively wealthy/educated have very low discount rates). Or maybe these interpersonal utility comparisons aren't valid in the first place; it's not clear what would ground them.
Also, following up on 2: depending on how we make interpersonal utility comparisons, rather than focusing on those with low personal time discount rates, those with the largest preference-based stakes could be utilitarians (especially those with the widest moral circles), or people with fanatical views or absolutist deontological views.
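To illustrate the discounting point above, here's a minimal sketch under exponential discounting; the 3% annual rate and the horizons are assumptions chosen for illustration, not claims about actual discount rates:

```python
# Toy illustration of exponential discounting of future welfare.
# The 3% annual rate and the horizons are assumptions for illustration only.

def discounted_life_years(horizon_years: int, annual_rate: float) -> float:
    """Present value, in current-year equivalents, of `horizon_years` of
    constant annual welfare discounted at `annual_rate` per year."""
    d = 1 / (1 + annual_rate)
    # Geometric series: sum_{t=0}^{n-1} d^t = (1 - d**n) / (1 - d)
    return (1 - d**horizon_years) / (1 - d)

print(discounted_life_years(50, 0.03))      # ~26.5
print(discounted_life_years(10_000, 0.03))  # ~34.3, near the cap of 1/(1 - d) ≈ 34.3
```

At a 3% personal discount rate, 10,000 extra years are worth only about 30% more than 50 years to the person now, so substantial discounting mostly neutralizes the huge ex ante stakes unless one of the two conditions above holds.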
Thanks for this, Michael. You’re right that if people could be kept alive a lot longer (and, perhaps, made to suffer more intensely than they once could as well), this could change the stakes. It will then come down to the probability you assign to a malicious AI’s inflicting this situation on people. If you thought it was likely enough (and I’m unsure what that threshold is), it could just straightforwardly follow that s-risk work beats all else. And perhaps there are folks in the community who think the likelihood is sufficiently high. If so, then what we’ve drafted here certainly shouldn’t sway them away from focusing on s-risk.
Oh, sorry if I was unclear. I didn't have in mind torture scenarios here (although that's a possibility), just people living very long voluntarily and to their own benefit. So rather than AMF saving around 50 years of valuable life in expectation per life saved, it could save thousands or millions of years, or more. And other work may increase some individuals' life expectancies even more.
I think it’s not too unlikely that we’ll cure aging or solve mind uploading in our lifetimes, especially if we get superintelligence.