I don’t think people fail to care about digital minds just because they are digital. Watching the first episode of Black Mirror, it’s hard not to feel sympathy for the simulated people. The show would probably be very unsuccessful if the audience had no emotional investment in what happens to them.
Some objections to target when trying to increase moral concern for digital minds might be:
they don’t exist, and feel much more hypothetical than “future generations”
it feels unclear what could be done to help them (them specifically, as opposed to helping future generations in general)
it feels hard to determine whether a digital mind (one that is not just an upload of a human or animal consciousness) is sentient, and what it would experience as positive or negative valence