In all of these situations, I think we can still say people “count” equally.
I don’t think this goes through. Let’s just talk about the hypothetical of humanity’s evolutionary ancestors still being around.
Unless you assign the same moral weight to an ape as you do to a human, you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn't even a clean line to draw between humans and our evolutionary ancestors.
Similarly, I don’t see how you can be confident that your moral concern in the present day is independent of exactly that genetic variation in the population. That genetic variation is the same kind of variation that, over time and amplified by many rounds of selection, made you care more about humans than about other animals. As such, it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.
Again, I expect that variance to be quite small, since genetic variance within the human population is much smaller than the variance between different species, and I also expect it to align very poorly with classical racist tropes, but the nature of the variance is ultimately the same.
And the last part of the sentence I quoted also seems hard to square with this. Digital people, including different EMs, might have hugely varying levels of capacity for suffering, happiness, and other things we care about. I do hope we create beings with much greater capacity for happiness than us, and I would consider that one of the moral priorities of our time.
For information, CEA’s OP links to an explanation of impartiality:
Impartial altruism: We believe that all people count equally. Of course it’s reasonable to have special concern for one’s own family, friends and life. But, when trying to do as much good as possible, we aim to give everyone’s interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.
That paragraph does feel kind of confused to me, though it’s hard to be precise in lists of principles like this.
As jimrandomh says above, it is widely accepted in EA that time and location do not matter morally (well, more so location; I think it's actually pretty common for EAs to think that far-future lives are worth less than present lives, though I don't agree with that reasoning). But that clearly does not imply that all people count equally, given that there are many possible reasons for differing moral weights.
EMs?
“Emulated Minds” aka “Mind uploads”.
Brain Emulations—basically taking a person and running a simulation of them on a computer, where they could potentially be copied, run faster or slower, etc.