Just to clarify, population ethics "deals with the moral problems that arise when our actions affect who and how many people are born and at what quality of life". You can reject the total view, and at the same time engage with population ethics.
It is the quality of life, not the duration of an individual's life or the sheer number of lives, that determines value.
Since the industrial revolution, increases in quality of life (welfare per person per year) have gone hand in hand with increases in both population and life expectancy. So a priori opposing the latter may hinder the former.
My ethics are utilitarian but definitely lean more toward the suffering-avoidance end of things, and lower populations have lower potential for suffering (at least in aggregate).
Lower populations have lower potential for suffering, but, at least based on the last few hundred years, they may also have greater potential for suffering per person per year. I wonder whether you mostly care about minimising total or average suffering. If total, I can see how maintaining 2010 would be good. If average, as you seemed to suggest in your comment, technological progress still looks very good to me.
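To make the total-versus-average distinction concrete, here is a rough sketch in notation of my own choosing (none of these symbols come from the original comments): write $N$ for the number of sentient beings and $\bar{s}$ for the average suffering per person per year, so that total suffering per year is

$$S_{\text{total}} = N \cdot \bar{s}.$$

Holding $\bar{s}$ fixed, a smaller $N$ always means a smaller $S_{\text{total}}$; but if technological progress lowers $\bar{s}$, the average-suffering picture can improve even while $N$ grows, so the two criteria can point in opposite directions.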
I've had to sit with this comment for a bit, both to make sure I didn't misunderstand your perspective and to make sure I was conveying my views accurately.
I agree that population ethics can still be relevant to the conversation even if its full conclusion isn't accepted. Moral problems can arise from, for instance, a one-child policy, and this is in the purview of population ethics without requiring the acceptance of some kind of population-maximizing hedonic system (which some PE proponents seem to support).
As for suffering: it is important to remember what it actually is. It is the pain of wanting to survive but being unable to escape disease, predators, war, poverty, violence, or myriad other horrors. It's the gazelle's agony at the lion's bite, the starving child's cry for sustenance, and the dispossessed worker's sigh of despair. It's easy (at least for me) to lose sight of this, of what "suffering" actually is, and so it's important for me to state this flat out.
So, being reminded of what suffering is, let's think about the kind of world where it can flourish. More population = more beings capable of suffering = more suffering in existence, for all instantiations of reality that are not literally perfect (since any non-perfect reality would contain some suffering, and this would scale up linearly with population). So lower populations are better from a moral perspective, because they have lower potential for suffering.
Most people I've seen espouse a pro-tech view seem to think (properly aligned) smarter-than-human AI will bring a utopia, similar to the paradises of many myths and faiths. Unless it can actually do that (and I have no reason to believe it will), suffering-absence (and therefore moral good, in my perspective) will always be associated with lower populations of sentient beings.
Thanks for following up. You seem to be supporting the reduction of total suffering. Which of the following would you pick:
A: your perfect utopia forever plus a very tiny amount of suffering (e.g. the mildest of headaches) for 1 second.
B: nothing forever (e.g. suffering-free collapse of the whole universe forever).
I think A is way, way better than B, even though B has less suffering. If you prefer A to B, I think you would be putting some value on happiness (which I think is totally reasonable!). So shouldn't the possibility of a much happier future, if technological progress continues, be given significant consideration?
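To spell out the comparison, here is a rough sketch in made-up notation ($\varepsilon$, $H$, and $w$ are illustrative and not from the original comments): let $\varepsilon > 0$ be the disvalue of the one-second mild headache, $H$ the enormous happiness of the perfect utopia, and $w \geq 0$ the weight given to happiness. Then

$$V(A) = wH - \varepsilon, \qquad V(B) = 0.$$

Under pure total-suffering minimisation ($w = 0$), $V(A) = -\varepsilon < V(B)$, so B wins; with any positive weight on happiness, $wH$ dwarfs $\varepsilon$ and A wins. Preferring A therefore amounts to putting some nonzero value on happiness, which is the point of the question.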
In this (purely hypothetical, functionally impossible) scenario, I would choose option B, not because of the mild, transient suffering in scenario A, but because of the possibility of the emergence of serious suffering in the future (which doesn't exist in B).
Happiness is also extremely subjective, and therefore can't be meaningfully quantified, while the things that cause suffering tend to be remarkably consistent across times, places, and even species. So basing a moral system on happiness (rather than suffering-reduction) seems to make no sense to me.
Scenario A assumed "your perfect utopia forever", so there would be no chance for serious suffering to emerge.
Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, with the caveat of forever, I'm not sure which scenario I'd prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude B is more rational.
I also get that it's an analogy meant to get me thinking about the deeper issues here, and I understand. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn't hypothetical anymore.
It's likely to happen in a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from what evidence I've seen recently, it doesn't seem to be). [epistemic certainty: relatively low, 60%]