So, being reminded of what suffering is, let's think about the kind of world where it can flourish. More population = more beings capable of suffering = more suffering in existence, for all instantiations of reality that are not literally perfect (since any non-perfect reality would contain some suffering, and this would scale up linearly with population). So lower populations are better from a moral perspective, because they have lower potential for suffering.
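To make that scaling claim concrete, here is a minimal sketch (my own formalization, not part of the original argument; the symbols N, s_i, and s-bar are illustrative): if every being in a non-perfect world carries some positive expected suffering, total expected suffering grows in direct proportion to population.

% Sketch only, assuming suffering adds up across beings. N = number of beings,
% s_i = suffering of being i, \bar{s} = average expected suffering per being;
% these symbols are mine, not the original poster's.
\[
  S(N) \;=\; \sum_{i=1}^{N} \mathbb{E}[s_i] \;\approx\; N\,\bar{s},
  \qquad \bar{s} > 0 \;\Longrightarrow\; S(N) \text{ grows linearly in } N .
\]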
You seem to be supporting the reduction of total suffering. Which of the following would you pick:
A: your perfect utopia forever plus a very tiny amount of suffering (e.g. the mildest of headaches) for 1 second.
B: nothing forever (e.g. suffering-free collapse of the whole universe forever).
I think A is way, way better than B, even though B has less suffering. If you prefer A to B, I think you would be putting some value on happiness (which I think is totally reasonable!). So shouldn't the possibility of a much happier future, if technological progress continues, be given significant consideration?
In this (purely hypothetical, functionally impossible) scenario, I would choose option B, not because of the mild, transient suffering in scenario A, but because of the possibility of serious suffering emerging in the future (a possibility that doesn't exist in B).
Happiness is also extremely subjective, and therefore can't be meaningfully quantified, while the things that cause suffering tend to be remarkably consistent across times, places, and even species. So basing a moral system on happiness (rather than suffering-reduction) seems to make no sense to me.
Scenario A assumed 'your perfect utopia forever', so there would be no chance for serious suffering to emerge.
Then that would make Scenario A much more attractive to me (not necessarily from a moral perspective), and I apologize for misunderstanding your hypothetical. To be honest, with the caveat of forever, I'm not sure which scenario I'd prefer. A is certainly much more interesting to me, but my moral calculus pushes me to conclude B is more rational.
I also get that it's an analogy meant to get me thinking about the deeper issues here. My perspective is just that, while I certainly find the philosophy behind this interesting, the issue of AI permanently limiting human potential isn't hypothetical anymore.
It's likely to happen within a very short period of time (relative to the total lifespan of the species), unless indefinite delay of AI development really is socio-politically possible (and from the evidence I've seen recently, it doesn't seem to be). [epistemic certainty: relatively low, 60%]
Thanks for following up.