I would estimate my disagreement at roughly 90% to 95%.
Default human values are largely indifferent or actively hostile to the suffering of non-human animals.
Humanity currently oversees massive amounts of animal suffering through factory farming & habitat destruction.
If an AGI were perfectly aligned to make things “go well” for humans, it would likely prioritize human flourishing, economic growth + resource acquisition. If human preferences do not drastically shift toward minimizing animal suffering, an AGI will have no inherent reason to protect animals, & might simply optimize the systems that currently exploit them.
A scenario where AGI goes exceptionally well for humans often includes escaping Earth, avoiding extinction & engaging in massive space colonization. From my perspective, this is a prime driver of astronomical suffering (s-risks).
Humans often romanticize nature. If humanity uses AGI to terraform other planets or seed life across the galaxy, it might intentionally or accidentally spread wild animal suffering on an astronomical scale.
A highly advanced, human-aligned AGI might run countless simulations of Earth’s evolutionary history for scientific or entertainment purposes. Tomasik has written extensively on the catastrophic moral implications if these simulated animals possess sentience & experience pain. That’s not so improbable given enough time imo.
Suffering-focused ethics prioritizes the prevention and reduction of extreme suffering over the promotion of happiness or human survival at all costs.
A future that “goes well” for humanity typically implies human survival, joy, and unfettered expansion.
For me, a future only goes “well” if the total amount of extreme suffering is minimized. Therefore, I would view a human utopia built alongside, or simply ignoring, the continuous suffering of biological or digital animals as a profound moral failure.
The small percentage of agreement would stem from the idea that if humans are wiped out by a misaligned AGI, animals might also be destroyed in the process (e.g., if the AGI harvests all biological matter on Earth). If AGI goes well for humans, animals at least avoid that specific instrumental convergence scenario. Furthermore, human prosperity could eventually lead to moral circle expansion, where humans use AGI to actively intervene in nature to reduce wild animal suffering, but I view this as highly contingent.