I definitely agree that competence is not the measure of worth, but I am also worried that in this comment you are kind of shoving out of view a potentially pretty important question: the genuine moral and game-theoretic relevance of differences between minds (both human and artificial minds).
I wrote up my thoughts here in this other comment, so I will mostly quote:
It is easy to come up with examples where, within the Effective Altruism framework, two people do not count equally. Indeed, most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying “all people count equally” is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it doesn’t really hold any water after even a tiny bit of poking, and your only link for this assertion is a random article written by CEA, which doesn’t argue for the claim at all and just blindly asserts it). It is still the case that most EAs believe that the variance in the importance of different people’s experience is relatively small, that this variance almost certainly does not align with historical conceptions of racism, and that there are at least some decent game-theoretic arguments for ignoring a good chunk of it. But that does not make “all people count equally” a “core belief”, a label that should be reserved for an extremely small number of values and claims. It might be a good enough approximation in almost all practical situations, but it is really not a deep philosophical assumption of any of the things I am working on, and I am confident that if I brought it up at an EA meetup, someone would quite convincingly argue against it.
This might seem like a technicality, but in this context the statement is specifically made to claim that EA has a deep philosophical commitment to valuing all people equally, independently of the details of how their minds work (whether because of genetics, developmental environment, or education). That reassurance does not work. I (and, my guess is, almost all extrapolations of the EA philosophy) value people approximately equally in impact estimates because the relative moral patienthood of different people, and their basic cognitive makeup, does not seem to differ much between populations, not because I have a foundational philosophical commitment to impartiality. If different human populations did differ a lot on the relevant dimensions, that would pose a real moral dilemma for the EA community, with no deep philosophical commitments to guard us from coming to uncomfortable conclusions (luckily, as far as I can tell, almost all analyses from an EA perspective lead to the conclusion that it is probably reasonable to weigh people equally in impact estimates, which does not conflict with society’s taboos, so this is not de facto a problem).
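(A concrete aside on the QALY point in the quote above: below is a minimal, purely illustrative sketch of why standard QALY-style accounting ends up weighting younger people more heavily. The life expectancy and quality weight are numbers I am making up for the example, not figures from any particular framework.)

```python
# Purely illustrative sketch (assumed numbers, not any framework's actual model):
# under a QALY-style calculation, the value of averting a death scales with
# remaining life expectancy, so younger people end up weighted more heavily.

ASSUMED_LIFE_EXPECTANCY = 80   # hypothetical average lifespan
ASSUMED_QUALITY_WEIGHT = 0.9   # hypothetical average quality-of-life weight

def qalys_from_averting_death(age: float) -> float:
    """QALYs gained by averting a death at a given age, in this toy model."""
    remaining_years = max(ASSUMED_LIFE_EXPECTANCY - age, 0)
    return remaining_years * ASSUMED_QUALITY_WEIGHT

print(qalys_from_averting_death(20))  # 54.0
print(qalys_from_averting_death(70))  # 9.0
```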
In another comment:
In all of these situations, I think we can still say people “count” equally.
I don’t think this goes through. Let’s just talk about the hypothetical of humanity’s evolutionary ancestors still being around.
Unless you assign equal moral weight to an ape and to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn’t even any clean line to draw between humans and our evolutionary ancestors.
Similarly, I don’t see how you can be confident that your moral concern in the present day is independent of the genetic variation in the population. That genetic variation is exactly the same kind that, over time, made you care more about humans than about other animals, amplified by many rounds of selection, and as such it would be very surprising if there were absolutely no difference in moral patienthood within the present human population.
Again, I expect that variance to be quite small, since genetic variance within the human population is much smaller than the variance between different species, and I also expect it to not align very well with classical racist tropes, but the nature of the variance is ultimately the same.
I think the conflation of capability with moral worth is indeed pretty bad in a bunch of different situations, but I also think different minds probably genuinely have different moral weights. While I don’t think the variance among human minds here rises to much relevance in daily decision-making, I do think the broader questions are quite important: engineering beings capable of achieving heights of much greater experience, or self-modifying in that direction, as well as the construction of artificial minds, where it’s a huge open question what moral consideration we should extend to them. And something about your comment feels like it’s making that conversation harder.
Like, the sentence: “Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—”
Like, I don’t know, there are definitely dimensions of capacity (probably not intelligence, though honestly also not definitely not-intelligence) that play at least some role in the actual moral relevance of a person. It has to be that way; otherwise I no longer have a good answer to many moral questions around animal ethics and the ethics of artificial minds. And empirically, after thinking about this question a bunch, I think the variance among the human population here is pretty small, but I do think it was worth checking and thinking about. I also feel like if someone showed up skeptical of my position here, I wouldn’t be particularly outraged or confused; it feels like a genuinely difficult question.
Yep, basically endorsed; this is like the next layer of nuance and consideration to be laid down; I suspect I was subconsciously thinking that one couldn’t easily get the-audience-I-was-speaking-to across both inferential leaps at once?
There’s also something about the difference between triaged and limited systems (which we are, in fact, in) and ultimate utopian ideals. I think that in the ultimate utopian ideal we do not give people less moral weight based on their capacity, but I agree that in the meantime scarce resources do indeed sometimes need dividing.
IMO, part of the issue is that we live in the convenient world where differences do not matter so much as to make hard work irrelevant.
But I disagree with Duncan Sabien’s general statement that arbitrarily large capability differentials do not matter morally.
More generally, if capability differentials mattered much more, through, say, genetic engineering, whole brain emulation, or AI, then I wouldn’t support the thesis that all sentient beings should be equal.
So I heavily disagree with this quoted section:
Competence is not the measure of worth. Fundamental equality is not the justification for fair and moral treatment.