I’m a technoskeptic because I’m a longtermist. I don’t want AI to destroy the potential of the future persons you describe (whose numbers are vast, as you linked) to exist and find happiness and fulfillment.
Note that only the 4 smallest estimates would apply if humans continued to exist as in 2010.
True, but they are still vastly large numbers, and they are all biological, Earth-based beings, given we continue to exist as in 2010. I think that is far more valuable than transforming the affectable universe for the benefit of “digital persons” (who aren’t actual persons, since to be a person is to be both sentient and biological).
I also don’t really buy population ethics. It is the quality of life, not the duration of an individual’s life or the sheer number of lives, that determines value. My ethics are utilitarian but definitely lean more toward the suffering-avoidance end of things, and lower populations have lower potential for suffering (at least in aggregate).
Just to clarify, population ethics “deals with the moral problems that arise when our actions affect who and how many people are born and at what quality of life”. You can reject the total view and still engage with population ethics.
Since the industrial revolution, increases in quality of life (welfare per person per year) have gone hand in hand with increases in both population and life expectancy. So a priori opposing the latter may hinder the former.
Lower populations have lower potential for total suffering, but, at least going by the last few hundred years, they may also involve greater suffering per person per year. I wonder whether you mostly care about minimising total or average suffering. If total, I can see how maintaining the 2010 state of affairs would be good. If average, as you seemed to suggest in your comment, technological progress still looks very good to me.
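To make the total-versus-average distinction concrete, here is a minimal way to write it down (just a sketch with illustrative symbols, not anything from the literature):

$$S_{\text{total}} = \sum_{i=1}^{N} s_i = N \cdot \bar{s}, \qquad \bar{s} = \frac{1}{N} \sum_{i=1}^{N} s_i,$$

where $N$ is the number of sentient beings and $s_i$ is the suffering being $i$ experiences per year. Holding $\bar{s}$ fixed, shrinking $N$ always shrinks $S_{\text{total}}$, which is the argument for freezing things at 2010; but the post-industrial pattern has been $N$ rising while $\bar{s}$ falls, so the total can move either way while the average clearly improves.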
I’ve had to sit with this comment for a bit, both to make sure I didn’t misunderstand your perspective and to make sure I was conveying my views accurately.
I agree that population ethics can still be relevant to the conversation even if its conclusions aren’t accepted in full. Moral problems can arise from, for instance, a one-child policy, and that falls within the purview of population ethics without requiring acceptance of some kind of population-maximizing hedonic system (which some population-ethics proponents seem to support).
As for suffering—it is important to remember what it actually is. It is the pain of wanting to survive but being unable to escape disease, predators, war, poverty, violence, or myriad other horrors. It’s the gazelle’s agony at the lion’s bite, the starving child’s cry for sustenance, and the dispossessed worker’s sigh of despair. It’s easy (at least for me) to lose sight of this, of what “suffering” actually is, and so it’s important for me to state this flat out.
So, being reminded of what suffering is, let’s think about the kind of world where it can flourish. More population = more beings capable of suffering = more suffering in existence, for all instantiations of reality that are not literally perfect (since any non-perfect reality would contain some suffering, and this would scale up linearly with population). So lower populations are better from a moral perspective, because they have lower potential for suffering.
Most people I’ve seen who espouse a pro-tech view seem to think that (properly aligned) smarter-than-human AI will bring a utopia, similar to the paradises of many myths and faiths. Unless it can actually do that (and I have no reason to believe it will), the absence of suffering (and therefore moral good, in my perspective) will always be associated with lower populations of sentient beings.
Thanks for following up.
You seem to be supporting the reduction of total suffering. Which of the following would you pick:
A: your perfect utopia forever plus a very tiny amount of suffering (e.g. the mildest of headaches) for 1 second.
B: nothing forever (e.g. suffering-free collapse of the whole universe forever).
I think A is way, way better than B, even though B has less suffering. If you prefer A to B, I think you would be putting some value on happiness (which I think is totally reasonable!). So shouldn’t the possibility of a much happier future, if technological progress continues, be given significant consideration?
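To spell out why, here is a rough way to score the two options (purely illustrative symbols, nothing rigorous):

$$S_A = \varepsilon > 0 = S_B \quad\Rightarrow\quad \text{B wins on pure total-suffering minimisation},$$
$$V_A = w \cdot H - \varepsilon \gg 0 = V_B \quad\Rightarrow\quad \text{A wins once happiness gets any positive weight } w > 0,$$

where $\varepsilon$ is one second of the mildest headache and $H$ is the happiness of the perfect utopia (enormous, arguably unbounded, since it lasts forever). The ranking flips entirely depending on whether happiness counts for anything at all.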
In this (purely hypothetical, functionally impossible) scenario, I would choose option B: not because of the mild, transient suffering in scenario A, but because of the possibility of serious suffering emerging in the future (a possibility that doesn’t exist under B).
Happiness is also extremely subjective, and therefore can’t be meaningfully quantified, while the things that cause suffering tend to be remarkably consistent across times, places, and even species. So basing a moral system on happiness (rather than on suffering-reduction) makes no sense to me.
Scenario A assumed “your perfect utopia forever”, so there would be no chance for serious suffering to emerge.
Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, given the “forever” caveat, I’m not sure which scenario I’d prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude that B is more rational.
I also get that it’s an analogy meant to get me thinking about the deeper issues here. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn’t hypothetical anymore.
It’s likely to happen in a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from the evidence I’ve seen recently, it doesn’t seem to be). [Epistemic certainty: relatively low, ~60%]