At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one very typical of, even to an extent uniquely radical about, EA: the one Bentham invokes when he says “each to count for one and none for more than one”, Sidgwick when he writes of the point of view of the universe, and Singer when he discusses equal consideration of equal interests. I would read this charitably and chalk it up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or how sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this many scandals in a row hurts, but guys, for the love of god just take the L; this behavior is very uncharming.
I think what Habryka is saying is that while EA does have some notion of equality, the reason it sticks so close to mainstream egalitarianism is that humans don’t differ much. If there were multi-species civilizations like those in Orion’s Arm, for example, where differences in abilities span multiple orders of magnitude, then a lot of stratification and non-egalitarianism would happen solely through the value of freedom/empowerment.
And this poses a real moral dilemma for EA, primarily because of impossibility results around fairness/egalitarianism.
or sentient beings count equally regardless of their species
Who supports this? This is an extremely radical proposal, one that I also haven’t seen defended anywhere. Of course sentient beings don’t count equally regardless of their species; that would imply that if fish turn out to be sentient (which they might), their aggregate moral weight would completely outweigh that of all of humanity right now. Maybe you buy that, but it’s definitely extremely far from consensus in EA.
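The worry here is ultimately arithmetic about aggregation. A minimal sketch of it, where every population figure and per-individual weight is a made-up illustration (not a claim about actual moral weights):

```python
# Illustrative only: compares aggregate moral weight under strict
# species-blind equal counting vs. capacity-weighted counting.
# All numbers below are hypothetical placeholders.

HUMANS = 8e9   # rough human population
FISH = 1e13    # hypothetical order of magnitude for wild fish

def aggregate_weight(population, per_individual_weight):
    """Total moral weight = number of individuals x weight per individual."""
    return population * per_individual_weight

# Strict species-blind equality: every sentient being counts for one.
equal_humans = aggregate_weight(HUMANS, 1.0)
equal_fish = aggregate_weight(FISH, 1.0)

# Capacity-weighted equal consideration: fish are (hypothetically)
# assigned a far smaller per-individual welfare capacity.
weighted_fish = aggregate_weight(FISH, 0.0001)

print(equal_fish / equal_humans)     # 1250.0: fish swamp humanity by sheer numbers
print(weighted_fish / equal_humans)  # 0.125: humanity dominates under this weighting
```

The point of the sketch is just that whether fish "outweigh humanity" is entirely a function of the per-individual weight chosen, which is exactly the parameter the principle under discussion leaves open.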
In general I feel like you just listed six different principles, some of which are much more sensible than others. I still agree that indifference to location and time is a pretty core principle, but I don’t see its relevance to the Bostrom discussion at hand, and so I assumed it was not the one CEA was referring to. This might be a misunderstanding, but I don’t really have any story in which stating that principle is relevant to Bostrom’s original statement or apology, given that racism concerns are present in the current day and affect people in the same places we are. If that is the statement CEA was referring to, then I do withdraw that part of the criticism and replace it with “why are you bringing up a principle that doesn’t seem to have much to do with the situation?”.
And then beyond that, I do indeed think asserting there is no difference whatsoever in moral consideration between people seems pretty crazy to me, and I haven’t seen it defended. I am not that familiar with Bentham’s exact arguments here, and I don’t think he is particularly frequently cited (or at least I haven’t seen it). I also haven’t seen most of the other philosophers cited here except Singer, and I would be happy to have my first object-level discussion now about whether you think a principle of perfectly equal moral consideration should hold. Singer has gone on record saying that different people do indeed have different moral weight, and this is one of his most controversial beliefs (i.e. his disability stance is a consequence of it and has in the past gotten him cancelled at various universities), so I don’t know what you are referring to here as the principle, though I also feel pretty confused about Singer’s reasoning here.
In general I think we discuss the differing moral weight of different animals all the time, and I don’t see us following a principle that puts sentient/conscious beings into one large uniform bucket.
Equality is always “equality with respect to what”. In one sense, giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing), the two are treated very unequally. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal (either you treat the two unequally with respect to money, or with respect to welfare, for instance).
The most radical view of equality of this sort is that for any being to whom what matters can to some extent matter, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.
Even if you disagree with some of the extreme applications of the principle, race is easy mode for this. Virtually everyone today agrees with equality in this case, so given what a unique cornerstone of EA philosophy this type of equality is in general, it makes sense to reiterate it in cases where it seems that people are being treated with callousness and disrespect based on their race; such cases are an especially worrying sign for us. Again, you might disagree that Bostrom is failing to apply equal respect of this sort, or object that this use of the word equality is not how you usually think of it, but I find it suspicious that so many people are boosting your comment, given how common, even mundane, statements like this are in EA philosophy, and that the statement links directly to a page explaining it on the main EA website.
The most radical view of equality of this sort, is that for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it
This feels to me like it is begging the question, so I am not sure I understand this principle. This framing leaves open the whole question of “what determines how much capacity for things mattering to them someone has?”. Clearly we agree that different animals have different capacities here. Even if a fish somehow managed to communicate “the only thing I want is fish food”, I am going to spend much less money on fulfilling that desire of theirs than I would spend on fulfilling an equivalent desire from a human.
Given that you didn’t explain that difference, I don’t currently understand how to apply this principle that you are talking about practically, since its definition seems to have a hole exactly the shape of the question you purported it would answer.
That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
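To make that weighting concrete, here is a minimal sketch of the hedonistic comparison described above, assuming one unit of (corrected) pleasure counts equally whatever species experiences it. The intensity factors are made-up placeholders, not estimates from the moral weight project:

```python
# Hedonistic comparison of two desire-fulfilments under species-blind
# equal consideration. The "intensity" factors are hypothetical.

def pleasure_from(raw_units, intensity_factor):
    """Corrected pleasure: raw desire-fulfilment scaled by how intensely
    this kind of being is assumed to experience pleasure."""
    return raw_units * intensity_factor

# A strong fish desire, heavily discounted for assumed lower intensity,
# vs. a mild human desire at full intensity.
fish_pleasure = pleasure_from(raw_units=10.0, intensity_factor=0.05)
human_pleasure = pleasure_from(raw_units=0.3, intensity_factor=1.0)

# Once corrected, the units sit on a single scale and are compared
# without any further reference to species.
prioritise = "fish" if fish_pleasure > human_pleasure else "human"
print(prioritise)
```

Note that all the substantive work is done by the intensity factor; the equality claim only governs what happens after that correction is applied.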
Yeah, I think there are a bunch of different ways to answer this question, and active research on it, but I feel like the answer here does indeed depend on empirical details and there is no central guiding principle that we are confident in that gives us one specific answer.
Like, I think the correct defense is to just be straightforward and say “look, I think different people are basically worth the same, since cognitive variance just isn’t that high”. I just don’t think there is a core principle of EA that would prevent someone from believing that people who have a substantially different cognitive makeup would also deserve less or more moral consideration (though the game theory here often makes it so that you should still trade with them in a way that evens stuff out, even if that’s not guaranteed).
I personally don’t find hedonic utilitarianism very compelling (and I think this is true for a lot of EA), so I am not super interested in valence-based approaches to answering this question, though I am still glad about the work Rethink is doing, since I think it helps me think about how to answer this question in general.
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why it is that people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant still, despite this, also describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstream view within EA. If it means (as MacAskill suggests it might, in his latest book) that the total well-being of fish outweighs the total well-being of humanity, then this is not an objectionable conclusion (and to think otherwise would be speciesist, on this view).
Just to clarify, I am a utilitarian, approximately, just not a hedonic utilitarian.