What follows is a tangent, but it feels like a relevant tangent. Like, I do not claim this is quite the same conversation as the above, it’s slightly in a different direction, but it’s not fully a non-sequitur.
Forgive the slightly-not-normal-for-this-venue language; this was originally a personal Facebook comment.
Here is a point that I don’t think gets made often enough:
It doesn’t matter whether other people are inferior, when it comes to talking about their fundamental dignity and the rights that a civilized society should grant them.
Like, often Nazis or misogynists or whatever will try to demonstrate that [some group] is objectively inferior on [some axis], and often the opposition will come right back with NUH-UH, [group] IS EVERY BIT AS CAPABLE—
I think there’s a mistake, there, and I think that mistake is *acting like that would matter,* even if true. Playing into the frame of the bigot, letting them set the terms of the debate, implicitly conceding that the question of two different groups’ equality or inequality is the *crux* of the issue.
It isn’t.
I happen to think that it’s *false* that [race] or [gender] or whatever is inferior; my sense is that even if the bell curves for different groups peak in slightly different places and have their tails in slightly different places, they basically cover the same ground and overwhelmingly overlap anyway, so whatever.
But even if it were *demonstrably true* that [group] were inferior, that wouldn’t change my sense of moral obligation toward its members, and it wouldn’t change my beliefs about what kinds of treatment are fair or unfair.
I know for a fact that I have more raw intelligence than most humans! Even in nerd circles, I’m more-than-half-the-time in the upper quartile of whatever room I’m in, and guess what! Doesn’t matter! Practically every human outstrips me in some domain or other anyway! I can’t step to someone’s unique expertise, nor can I compete with them along domains orthogonal to intelligence (e.g. physical prowess), and even if I were superior to someone along 10 out of 10 of the *most* important axes …
… EVEN THEN, I do not think that gives me the right to dictate the terms of their existence, cut them off from opportunity, or take a larger share of the social pie.
The whole *point* of civilization is moving away from a state of base natural anarchy, where your value is tied to your capability. The whole point of building a safe, stable, cooperative society is making it so that you *don’t* have to pull your whole weight every second of every day or else be abandoned to the wolves or enslaved by strongmen.
The thing we’re trying to build here is a world where the absolutely inferior—
(To the extent that’s even a category that exists; a lot depends on your point of view and what axes you consider relevant)
The thing we’re trying to build here is a world where *even the absolutely inferior* get to have the maximum achievable amount of sovereignty, and agency, and happiness, and health, and get to participate in society to the greatest possible degree permitted by their personal limitations and the technology we have available (both literal technology and social/metaphorical tech).
IDGAF if you can “prove” some group’s inferiority. It means nothing to me. It changes nothing. It was never the key hinge of the conversation for me. Superiority is not the foundation of my sense of my fellow humans’ dignity.
(And that’s setting *aside* the fact that even if you’ve proven a difference between groups at the statistical level, you’ve done very little to demonstrate the relevance of that statistical difference on individual members; bell curves are not their averages.)
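(To make that last point concrete, here is a minimal sketch in Python; the 0.1-standard-deviation gap between the two curves is a purely illustrative assumption, not a figure about any real group.)

```python
# Minimal sketch: two bell curves whose peaks differ by an (assumed,
# illustrative) 0.1 standard deviations still share almost all their area.
from statistics import NormalDist

d = 0.1  # assumed gap between the group means, in standard deviations

# Overlap coefficient for two equal-variance normals with mean gap d:
# OVL = 2 * Phi(-d / 2), where Phi is the standard normal CDF.
overlap = 2 * NormalDist().cdf(-d / 2)
print(f"shared area: {overlap:.1%}")  # ~96.0%

# Probability that a random member of the lower-mean curve outscores a
# random member of the higher-mean curve: Phi(-d / sqrt(2)).
p_upset = NormalDist().cdf(-d / 2**0.5)
print(f"lower-mean individual scores higher: {p_upset:.1%}")  # ~47.2%
```

Even with the gap, a random individual from the lower-peaked curve outscores a random individual from the higher-peaked curve nearly half the time, which is the sense in which the averages tell you very little about individuals.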
I think it’s good to push back on bigots when they are spreading straightforward falsehoods. I’m not saying “don’t fire back with facts” in these conversations.
But the *fire* with which people fire back seems to me to be counterproductive and wrong, and it worries me. Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—
I kind of fear that those people are closer to the bigots than I might wish. That they’re responding with such fervor because they *do* believe, on some gut level, that if the groups are different, then the moral standards must necessarily also be different. They don’t want to conclude that the moral standards should be different, and so they object with *desperation* to any evidence that threatens to show actual differences between groups.
Potential competence differences between groups don’t matter on a moral level. Or at least, let me-and-my-philosophy be an existence proof to you: they don’t HAVE to matter.
You can build a society that doesn’t give a fuck if people are fundamentally inferior, and that does its best to be fair and moral toward them anyway.
That’s the society you *should* be trying to build. If for no other reason than the fact that that’s going to be you one day, when you break a leg or have a stroke or just succumb to the vicissitudes of time. If for no other reason than the fact that that could be your kid, or the kid of someone you care about.
(There are other reasons, too, but that’s the one that’s hopefully at least a little bit persuasive even to selfish egotists.)
Competence is not the measure of worth. Fundamental equality is *not* the justification for fair and moral treatment.
I definitely agree that competence is not the measure of worth, but I am also worried that in this comment you are kind of shoving out of view a potentially pretty important question, which is the genuine moral and game-theoretic relevance of different minds (both human and artificial).
I wrote up my thoughts here, in another comment, so I will mostly quote:
It is easy to come up with examples where, within the Effective Altruism framework, two people do not count equally. Indeed, most QALY frameworks value young people more than older people; many discussions have been had about hypothetical utility monsters, and about how some people might have greater moral patienthood because they are able to experience more happiness or more suffering; and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying “all people count equally” is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it doesn’t really hold any water after even just a tiny bit of poking, and your only link for this assertion is a random article written by CEA, which doesn’t argue for the claim at all and just blindly asserts it). It is still the case that most EAs believe that the variance in the importance of different people’s experience is relatively small, that this variance almost certainly does not align with historical conceptions of racism, and that there are at least some decent game-theoretic arguments for ignoring a good chunk of it. But none of that makes “all people count equally” a “core belief,” a label that should be reserved for an extremely small number of values and claims. It might be a good enough approximation in almost all practical situations, but it is really not a deep philosophical assumption of any of the things that I am working on, and I am confident that if I were to bring it up at an EA meetup, someone would quite convincingly argue against it.
This might seem like a technicality, but in this context the statement is specifically made to claim that EA has a deep philosophical commitment to valuing all people equally, independently of the details of how their minds work (whether because of genetics, or developmental environment, or education). This reassurance does not work. I (and, my guess is, almost all extrapolations of the EA philosophy) value people approximately equally in impact estimates because it looks like the relative moral patienthood of different people, and the basic cognitive makeup of people, does not differ much between populations, not because I have a foundational philosophical commitment to impartiality. If it were the case that different human populations differed a lot on the relevant dimensions, this would spell a real moral dilemma for the EA community, with no deep philosophical commitments to guard us from coming to uncomfortable conclusions (luckily, as far as I can tell, almost all analyses from an EA perspective lead to the conclusion that it’s probably reasonable to weigh people equally in impact estimates, which doesn’t conflict with society’s taboos, so this is not de facto a problem).
In another comment:
In all of these situations, I think we can still say people “count” equally.
I don’t think this goes through. Let’s just talk about the hypothetical of humanity’s evolutionary ancestors still being around.
Unless you assign the same moral weight to an ape as you do to a human, this means that you will almost certainly assign lower moral weight to the humans or near-human species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn’t even any clean line to draw between humans and our evolutionary ancestors.
Similarly, I don’t see how you can be confident that your moral concern in the present day is independent of the genetic variation in the population. That variation is exactly the same kind that, over time, made you care more about humans than about other animals, amplified by many rounds of selection, and as such, it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.
Again, I expect that variance to be quite small, since genetic variance in the human population is much smaller than the variance between different species, and also for that variance to really not align very well with classical racist tropes, but the nature of the variance is ultimately the same.
I think the conflation of capability with moral worth is indeed pretty bad in a bunch of different situations. But, like, I also think different minds probably genuinely have different moral weights, and while I don’t think the variance among human minds rises to the level of mattering much in daily decision-making, I do think the broader questions are quite important: questions around engineering beings capable of achieving heights of much greater experience, or self-modifying in that direction, as well as the construction of artificial minds, where it’s a huge open question what moral consideration we should extend to them. Something about your comment feels like it’s making that conversation harder.
Like, the sentence: “Acting outraged at the mere possibility that some group might be inferior to another, as if that would be morally relevant in any way whatsoever—”
Like, I don’t know, there are definitely dimensions of capacity (probably not intelligence, though honestly also not definitely not-intelligence) that play at least some role in the actual moral relevance of a person. They have to; otherwise I definitely no longer have a good answer to many moral questions around animal ethics and the ethics of artificial minds. And empirically, after thinking about this question a bunch, I do think the variance among the human population here is de facto pretty small, but it was worth checking and thinking about, and if someone were to show up skeptical of my position here, I wouldn’t be particularly outraged or confused; it feels like a genuinely difficult question.
Yep, basically endorsed; this is like the next layer of nuance and consideration to be laid down; I suspect I was subconsciously thinking that one couldn’t easily get the-audience-I-was-speaking-to across both inferential leaps at once?
There’s also something about the difference between triaged and limited systems (which we are, in fact, in) and ultimate utopian ideals. I think that in the ultimate utopian ideal we do not give people less moral weight based on their capacity, but I agree that in the meantime scarce resources do indeed sometimes need dividing.
IMO, part of the issue is that we live in a convenient world where differences do not matter so much as to make hard work irrelevant.
But I disagree with Duncan Sabien’s general statement that arbitrarily large capability differentials do not matter morally.
More generally, if capability differentials mattered much more, through, say, genetic engineering or whole-brain emulation or AI, then I wouldn’t support the thesis that all sentient beings should be equal.
So I heavily disagree with this quoted section:
Competence is not the measure of worth. Fundamental equality is not the justification for fair and moral treatment.
mild tangent, but ultimately not really a tangent -
The whole *point* of civilization is moving away from a state of base natural anarchy, where your value is tied to your capability
yeah, maybe; but anarchy.works. non-authoritarianism, as the word was originally meant, is about forming stable multiscale bonds of non-dominating microsolidarity. non-archy has worked very well before; in order to work well, there has to be a large cooperation bubble that prevents takeover by authority structures.
that isn’t what you meant, of course—you meant destructive chaos, the meaning usually expected from the word. but I claim that it is worth understanding why the word anarchy has such strong detractors and supporters, and learning what the underlying principles of those ethics are.
Strongly agreed with the point actually being made by the word in this context, and with the entire comment to which I reply, I just wanted to comment on the word as used.