Caring about hypothetical future people whose wellbeing could be affected by hypothetical (and often nebulous) problems like misaligned AI surely means we've stopped caring about those living in extreme poverty, or so it goes.
If the future people are not hypothetical, but people that you believe will exist, then I think your triage approach is virtuous. That is a lot of people to care about, but like you say, your moral circle has widened. Widening it to non-human animals is a conceptual challenge as well. For now, I consider dolphins, whales, primates, some smaller mammals, farm animals, and squid as deserving moral status comparable to humans. I’m curious where you draw the line.
Your write-up is inspiring. I would still shrink from the challenge you face by adding in far-future people, but I consider them to be hypothetical: I don’t believe that they will necessarily exist. That doesn’t stop me from making up my own ideas of what makes a good long-term future for humanity.