Work in population ethics, particularly on the plausibility of ‘person-affecting’ views: in slogan form, those “in favour of making people happy, but indifferent about making happy people”. If such views are true, that would count against longtermism; how much they count also depends on one’s view of moral uncertainty. [emphasis mine]
What exact person-affecting view are you considering here? I don’t think it is at all clear that standard person-affecting views count against longtermism. This is because plausible person-affecting views will still find it important to improve the lives of future people who will necessarily exist.
Therefore longtermist interventions that don’t focus on reducing risks of extinction but instead look to improve the lives of future necessary people should remain robust to a person-affecting view. These might include:
Mitigating climate change
Aligning AI (if this goes well we could have a utopia; if it goes poorly we could have a dystopia)
Improving institutions
Improving values
Global Priorities Research
Patient philanthropy
Growing the EA movement / building EA infrastructure
Boosting economic growth / tech progress
Reducing s-risks
(Setting aside the non-identity problem … which I think is pretty plausibly solvable) …
I recall reading others saying that total-population utilitarianism is very much pivotal to the case for prioritizing the mitigation of x-risk relative to near-term interventions.
So it’s true that longtermist interventions are not all irrelevant under a person-affecting view (PAV), but the leading intervention/theory of change seems dramatically less of a slam dunk.
Not exactly, and this is what I was trying to explain with my first comment.
Person-affecting views (PAVs) strongly count against interventions that aim to reduce the probability of human extinction (although not entirely, because extinction would still mean the deaths of billions of people, which PAVs do find bad).
The key point is that existential risks are broader than extinction risks. Some existential risks can lock us into bad states of the world in which humanity continues to exist.
For example, if AI enslaves us and we have little to no hope of escaping until the universe becomes uninhabitable, that still seems very bad according to pretty much any plausible PAV. Michael may even agree with this, as non-identity issues don’t usually bite when we’re considering future lives that are worse than neutral (as might be the case if we’re literally enslaved by AI!).
More generally, any intervention that improves the wellbeing of future people (and not by ensuring those people actually exist) should be good under plausible PAVs. Michael seems to disagree with this by raising the non-identity problem, but I don’t find that persuasive and I don’t think others would either. I’ll list my proposed longtermist interventions that still work under PAVs again:
Mitigating climate change
Aligning AI (if this goes well we could have a utopia; if it goes poorly we could have a dystopia)
Improving institutions
Improving values
Global Priorities Research
Patient philanthropy
Growing the EA movement / building EA infrastructure
Boosting economic growth / tech progress
Reducing s-risks
I think I agree with everything you list. But I also think that extinction risk (especially from misaligned AI?) and the loss of trillions of potential people are the slam-dunk case for longtermism that is usually espoused. And PAVs dramatically affect that case.
Mitigating extinction risk also seems a lot more tractable to me than doing something about s-risks. With s-risks it seems so much harder to predict what would influence what over the long-ish run. But we have a pretty good sense of things that reduce near-term x-risk… preventing dangerous bio research, etc.
I’m not sure how likely it is that misaligned AI would make us go extinct versus keep us around but in a bad state, but it’s worth noting that people have written about how misaligned AI could cause s-risks.
Also, I think it’s plausible that considerations of artificial sentience dominate all others in a way that is robust to PAVs. We could create vast amounts of artificial sentience with experiences ranging from the very worst to the very best we can possibly imagine. Making sure we don’t create suffering artificial sentience / making sure we do create vast amounts of happy sentience both potentially seem overwhelmingly important to me. This will require:
Understanding consciousness far better than we currently do
Improving values and expanding the moral circle
So I’m a bit more optimistic than you are about longtermist approaches that don’t focus on reducing extinction risk.
Hello Jack (again!),
This is because plausible person-affecting views will still find it important to improve the lives of future people who will necessarily exist.
I agree with this. But the challenge from the Non-Identity problem is that there are few, if any, necessarily existing future individuals: what we do causes different people to come into existence. This raises a challenge to longtermism: how can we make the future go better if we can’t make it go better for anyone in particular? If an outcome is not better for anyone, how can it be better? In the discourse, philosophers tend to accept that it is the implication of (some) person-affecting views that we can’t (really) make the future go better for anyone, but take this implication as a decisive reason to reject those views. My suspicion is that philosophers have been too quick to dismiss such person-affecting views and they merit another look.
Hmm. Do you seriously think that philosophers have been too quick to dismiss such person-affecting views?
If you accept that impacts on the future generally don’t matter because you won’t really be harming anyone (as they wouldn’t have existed if you hadn’t done the act), then you can justify doing some things that I’d imagine pretty much everyone would agree are wrong.
For example, you could justify going around putting millions of landmines underground, set to blow up in 200 years’ time, causing immense misery to future people for no other reason than that you want to cause their suffering. Provided those people will still live net-positive lives overall, your logic says this isn’t a bad thing to do. Do you really think it’s OK to place the mines? Do you think anyone bar a psychopath thinks it’s OK to place the mines?
Of course, as you imply, there are other ways to respond to the non-identity problem. You could resort to an impersonal utilitarianism and say no, don’t place the mines, because doing so will cause immense suffering and suffering is intrinsically bad. Do you really think this is a weaker response?