(Setting aside the non-identity problem … which I think is pretty plausibly solvable) …
I recall reading others saying that total-population utilitarianism is pivotal to the case for prioritizing the mitigation of x-risk relative to near-term interventions.
So it’s true that not all longtermist interventions are irrelevant under PAVs, but the leading intervention/theory of change seems dramatically less of a slam dunk.
Not exactly, and this is what I was trying to explain with my first comment.
Person-affecting views (PAVs) do substantially weaken the case for interventions that aim to reduce the probability of human extinction (although not entirely, because extinction would still mean the deaths of billions of existing people, which PAVs count as bad).
The key point is that existential risks are broader than extinction risks. Some existential risks can lock us into bad states of the world in which humanity still continues to exist.
For example, if AI enslaves us and we have little to no hope of escaping until the universe becomes uninhabitable, that still seems very bad according to pretty much any plausible PAV. Michael may even agree with this as non-identity issues don’t usually bite when we’re considering future lives that are worse than neutral (as might be the case if we’re literally enslaved by AI!).
More generally, any intervention that improves the wellbeing of future people (and not by ensuring these people actually exist) should be good under plausible PAVs. Michael seems to disagree with this by raising the non-identity problem, but I don’t find that persuasive and I don’t think others would either. Here, again, are the longtermist interventions I proposed that still work under PAVs:
Mitigating climate change
Aligning AI (if this goes well we could have a utopia; if it goes poorly, we could have a dystopia)
Improving institutions
Improving values
Global Priorities Research
Patient philanthropy
Growing the EA movement / building EA infrastructure
Boosting economic growth / tech progress
Reducing s-risks
I think I agree with everything you list. But I also think that extinction risk (especially from misaligned AI?) and the loss of trillions of potential people are the slam-dunk case for longtermism that is usually espoused. And PAVs dramatically affect that case.
Mitigating extinction risk also seems a lot more tractable to me than doing something about s-risks. With s-risks it seems much harder to predict what would influence what over the long-ish run. But we have a pretty good sense of what reduces near-term x-risk… preventing dangerous bio research, etc.
I’m not sure how likely misaligned AI is to make us go extinct versus keep us around in a bad state, but it’s worth noting that people have written about how misaligned AI could cause s-risks.
Also, I think it’s plausible that considerations of artificial sentience dominate all others in a way that is robust to PAVs. We could create vast amounts of artificial sentience with experiences ranging from the very worst to the very best we can possibly imagine. Making sure we don’t create suffering artificial sentience / making sure we do create vast amounts of happy sentience both potentially seem overwhelmingly important to me. This will require:
Understanding consciousness far better than we currently do
Improving values and expanding the moral circle
So I’m a bit more optimistic than you are about longtermist approaches that don’t focus on reducing extinction risk.