Jacy has argued that farm-animal suffering is a closer analogy to most far-future suffering than wild-animal suffering, and I largely agree with his arguments, although he and I both believe that some concern for naturogenic suffering is an important part of a “moral-circle-expansion portfolio”, especially if events within some large simulations fall mainly into the “naturogenic” moral category. There could also be explicit nature simulations run for reasons of intrinsic/aesthetic value or entertainment.
I agree that terraforming and directed panspermia, if they occur at all, will be relatively brief preludes to a much larger and longer artificial future. A main reason I mention terraforming and directed panspermia at all is because they’re less speculative/weird, and there’s already a fair amount of discussion about them. But as I said here: “in the long run, it seems likely that most Earth-originating agents will be artificial: robots and other artificial intelligences (AIs). [...] we should expect that digital, not biological, minds will dominate in the future, barring unforeseen technical difficulties or extreme bio-nostalgic preferences on the part of the colonizers.”
Then we can have a reasonable expectation that quality of life will be positive, as people will have plenty of contact with, and responsibility for, other organisms.
...only if (1) concern for the experienced welfare (rather than, say, autonomy) of animals increases significantly from where it is now (including for invertebrates, who hold the majority of the neurons) and (2) such concern doesn’t later decrease. Neither of these assumptions is obvious. Personally I find it probable that moral concern for the suffering of animal-like creatures, like most human values, will be a distant memory within 5000 years, for similar reasons as worship of the ancient-Egyptian deities is a distant memory today.
Interesting info. :)