Thanks for your comment.
I don’t think this is a compelling argument. Being less immoral than the worst doesn’t lead me to conclude we should increase the immorality further. I do think it should lead us to have compassion, insofar as humanity makes it very difficult not to be immoral — it’s an evolutionary problem.
That’s true! But still very bad for many. And of course, I’m concerned about all sentient beings, not just humans — the math looks truly horrible once non-humans are included. I do credit humans for unintentionally reducing wild animal suffering by being so drawn to destroying the planet, but I expect the opposite will happen in space colonization scenarios (e.g. we will seed wildlife, create more digital minds, etc.)
I’m a longtermist in this sense. I’m concerned about us torturing non-humans not just over the next several decades, but for eons after. This could look like factory farming animals, seeding wild animals, creating digital minds, bringing pets with us, and so on.
Is that transhumanism taken to the max? I need to learn more about those who endorse this philosophy — I imagine there is some diversity among them. Under their ideal circumstances, would the immorality in us be eradicated (setting aside the s-risks and x-risks from AI acceleration)? It sounds like they are a different kind of utopian.