I feel really quite bad about this post. Despite being only a single paragraph, it succeeds at confidently making a wrong claim, pretending to speak on behalf of both an organization and a community that it is not accurately representing, communicating ambiguously (probably intentionally, in order to avoid being pinned down on any specific position), and for some reason omitting crucial context.
Contrary to the OP, it is easy to come up with examples where, within the Effective Altruism framework, two people do not count equally. Indeed, most QALY frameworks value young people more than older people; many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering; and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus.
Saying “all people count equally” is not a core belief of EA, and indeed I do not remember hearing it seriously argued for a single time in my almost 10 years in this community (which is not surprising, since it doesn’t really hold water after even a tiny bit of poking, and your only link for this assertion is a random article written by CEA, which doesn’t argue for the claim at all and just blindly asserts it). It is still the case that most EAs believe that the variance in the importance of different people’s experience is relatively small, that this variance almost certainly does not align with historical conceptions of racism, and that there are at least some decent game-theoretic arguments for ignoring a good chunk of this variance. But this does not make “all people count equally” a “core belief”, a label that should clearly be reserved for an extremely small number of values and claims. It might be a good enough approximation in almost all practical situations, but it is really not a deep philosophical assumption of any of the things that I am working on, and I am confident that if I were to bring it up at an EA meetup, someone would quite convincingly argue against it.
This might seem like a technicality, but in this context the statement is specifically made to claim that EA has a deep philosophical commitment to valuing all people equally, independently of the details of how their minds work (whether because of genetics, developmental environment, or education). This reassurance does not work. I (and, my guess is, almost all extrapolations of the EA philosophy) value people approximately equally in impact estimates because the relative moral patienthood of different people, and the basic cognitive makeup of people, does not seem to differ much between different populations, not because I have a foundational philosophical commitment to impartiality. If it were the case that different human populations differed a lot on the relevant dimensions, this would spell a real moral dilemma for the EA community, with no deep philosophical commitments to guard us from coming to uncomfortable conclusions (luckily, as far as I can tell, in this case almost all analyses from an EA perspective lead to the conclusion that it’s probably reasonable to weigh people equally in impact estimates, which doesn’t conflict with society’s taboos, so this is not de-facto a problem).
Moving on, I do not believe that this statement speaks on behalf of the employees of CEA, many of whom I am confident also feel quite badly represented by it, nor does it speak on behalf of Effective Altruism. I don’t know what process produced it, but I don’t think it speaks for me or almost anyone else I know within the EA community. Organizations themselves don’t have beliefs, and EA has generally successfully avoided descending into meaningless marketing and PR speech where organizations take positions despite nobody at those organizations actually believing those positions. If you want to make a statement on this matter, speak as an individual. Individuals can meaningfully have beliefs. Organizations pretending to have beliefs is usually primarily a tactic to avoid taking responsibility and to create a diffuse target.
Additionally, it is completely unclear from your statement whether you are referring to Bostrom’s original email or to Bostrom’s apology. I don’t know why you are being ambiguous, but it seems quite plausible that you are doing so in order to avoid being pinned down as either repudiating the statements in Bostrom’s apology, which seem quite reasonable to me and many other EAs (and whose repudiation would therefore attract ire from the community), or failing to repudiate those same statements, which are attracting a lot of public ire for not being explicitly anti-racist enough. If this is indeed what you are doing, then please stop. This ambiguity is toxic to clear communication. If this is not what you are doing, then please clarify, and also please get better at writing; it seems extremely obvious that this was going to be a problem with this statement.
Lastly, you are also not linking to either Bostrom’s original statement or his apology. I don’t know why. Doing so would both clear up the ambiguity discussed above and provide crucial context for anyone trying to understand what is going on who might not have seen Bostrom’s apology. My guess is you are doing this with some other PR reason in mind. Maybe so that when people Google this topic later on, this doesn’t show up in search? Maybe so that the lack of context makes it less likely that people outside the community will understand what this statement is about? In any case, either please get better at communicating, or stop the weird PR games that you are seemingly trying to play here.
Overall, despite this being only a single paragraph, I think little that CEA has produced has made me feel as badly represented, or as alienated from the EA community, as this statement. Please abandon whatever course you are setting out on where this is how you communicate with both the public and the community.
I think I do see “all people count equally” as a foundational EA belief. This might be partly because I understand “count” differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were “core” to EA, rather than idiosyncratic to me). What I understand by “people count equally” is something like “1 person’s wellbeing is not more important than another’s”.
E.g. a British nationalist might not think that all people count equally, because they think their compatriots’ wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.
“most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus”
In all of these situations, I think we can still say people “count” equally. QALY frameworks don’t say that young people’s wellbeing matters more—just that if they die or get sick, they stand to lose more wellbeing than older people, so it might make sense to prioritize them. This seems similar to how I prioritize donating to poor people over rich people—it’s not that rich people’s wellbeing matters less, it’s just that poor people are generally further from optimal wellbeing in the first place. And I think this reasoning can be applied to hypothetical people/beings with greater capacity for suffering. I think greater capacity for happiness is trickier and possibly an object-level disagreement—I wouldn’t be inclined to prioritize Happiness Georg’s happiness above all else, because his happiness outweighs the suffering of many others, but maybe you would bite that bullet.
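To make the QALY point concrete, here is a minimal sketch of the arithmetic, with made-up numbers (the remaining-life-years and quality weights below are purely illustrative, not from any real QALY table):

```python
def qalys_at_stake(remaining_years: float, quality_weight: float) -> float:
    """QALYs lost if this person dies now: remaining years times the quality of those years."""
    return remaining_years * quality_weight

# Hypothetical patients; both get the SAME per-QALY weight.
young_patient = qalys_at_stake(remaining_years=60, quality_weight=0.9)  # 54.0 QALYs at stake
old_patient = qalys_at_stake(remaining_years=10, quality_weight=0.9)    # 9.0 QALYs at stake

# Prioritizing the younger patient follows from how much wellbeing is at
# stake, not from weighting their wellbeing more per year.
print(young_patient > old_patient)  # True
```

On this accounting, each healthy life-year gets the same weight for both people; the prioritization falls out of how many such years are at stake.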
Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, “it seems to me that one of the core values of effective altruism is that of impartiality: giving equal moral weight to people who are distant from me in space and/or time.”
I agree that “all people count equally” is an imprecise way to express that value (and I would probably choose to frame it in the lens of “value” rather than “belief”), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.
But there is a huge difference in this case between something being a common belief and a philosophical commitment, and there is also a huge difference between saying that space/time does not matter and that all people count equally.
I agree that most EAs believe that people roughly count equally, but if someone were to argue against that, I would in no way think they were violating any core tenets of the EA community. And that makes the sentence in this PR statement fall flat, since I don’t think we can give any reassurance that empirical details will not change our minds on this point.
And yeah, I think time/space not mattering is a much stronger core belief, but as far as I can tell that doesn’t seem to have anything to do with the concerns this statement is trying to preempt. I don’t think racism and similar stuff is usually motivated by people being far away in time and space (and indeed, my guess is something closer to the opposite is true, where racist individuals are more likely to feel hate towards the immigrants in their country, and more sympathy for people in third world countries).
One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads EA to emphasize poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever the donor is.
This is narrower than the broad philosophical commitment Habryka is talking about, though. Taken as a broad philosophical commitment, “all people count equally” would force some strange conclusions when translated into a QALY framework, and when applied to AI, and would also imply that you shouldn’t favor people close to you over people in distant poor countries at all, even if the QALYs-per-dollar were similar. I think most EAs are in a position where they’re willing to pay $X/QALY to extend the lives of distant strangers, $5X/QALY to extend the lives of acquaintances, and $100X/QALY to extend the lives of close friends and family. And I think this is philosophically coherent and consistent with being an effective altruist.
In all of these situations, I think we can still say people “count” equally.
I don’t think this goes through. Let’s just talk about the hypothetical of humanity’s evolutionary ancestors still being around.
Unless you assign the same moral weight to an ape as to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn’t even any clean line to draw between humans and our evolutionary ancestors.
Similarly, I don’t see how you can be confident that your moral concern in the present day is independent of exactly that genetic variation in the population. That genetic variation is exactly the same kind that, over time, made you care more about humans than about other animals, amplified by many rounds of selection, and as such it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.
Again, I expect that variance to be quite small, since genetic variance in the human population is much smaller than the variance between different species, and also for that variance to really not align very well with classical racist tropes, but the nature of the variance is ultimately the same.
And the last part of the sentence that I quoted also seems not very compatible with this. Digital people might have hugely varying levels of capacity for suffering, happiness, and other things we care about, including different EMs. I indeed hope we create beings with much greater capacity for happiness than us, and would consider that one of the moral priorities of our time.
For information, CEA’s OP links to an explanation of impartiality:
Impartial altruism: We believe that all people count equally. Of course it’s reasonable to have special concern for one’s own family, friends and life. But, when trying to do as much good as possible, we aim to give everyone’s interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.
That paragraph does feel kind of confused to me, though it’s hard to be precise in lists of principles like this.
As jimrandomh says above, it is widely accepted in EA that time and location do not matter morally (well, more so location; I think it’s actually pretty common for EAs to think that far-future lives are worth less than present lives, though I don’t agree with this reasoning). But that clearly does not imply that all people count equally, given that there are many possible reasons for differing moral weights.
EMs?
“Emulated Minds” aka “Mind uploads”.
Brain Emulations—basically taking a person and running a simulation of them on a computer, where they could potentially be copied, run faster or slower, etc.
Thanks for writing this up Amber — this is the sense that we intended in our statement and in the intro essay that it refers to (though I didn’t write the intro essay). We have edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are more like “core hypotheses, but subject to revision” than “set in stone”.
Sorry for the slow response.
I wanted to clarify and apologise for some things here (not all of these are criticisms you’ve specifically made, but this is the best place I can think of to respond to various criticisms that have been made):
This statement was drafted and originally intended to be a short quote that we could send to journalists if asked for comment. On reflection, I think that posting something written for that purpose on the Forum was the wrong way to communicate with the community and a mistake. I am glad that we posted something, because I think that it’s important for community members to hear that CEA cares about inclusion, and (along with legitimate criticism like yours) I’ve heard from many community members who are glad we said something. But I wish that I had said something on the Forum with more precision and nuance, and will try to be better at this in future.
The first sentence was not meant to imply that we think that Bostrom disagrees with this view, but we can see why people would draw this implication. It’s included because we thought lots of people might get the impression from Bostrom’s email that EA is racist and I don’t want anyone — within or outside the community — to think that. Nevertheless this was sloppy, and is something that we should have caught when drafting it. Sorry.
We also intended the first sentence to have a meaning like Amber’s interpretation above, rather than the interpretation you had, but we agree that this is unclear. We’ve just edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are mostly more like “core hypotheses, but subject to revision” than “set in stone”.
This statement was intended as a reaction to Bostrom’s initial email (CW that this link includes a racial slur). I agree that if we had linked to that email it would have been clearer, and at the time I posted it I didn’t even consider that this might be ambiguous. Sorry.
More generally, we’re thinking about how we can improve our responses to situations like this in the future. I’m also planning to write up more about our overall approach to comms (TL;DR is that I agree with various concerns that have been raised about CEA and others in the community caring too much about PR concerns; I think truthfully saying what you believe — carefully and with compassion — is almost always more important than anything else), but it might be a little while before I get round to that.
I appreciate this
I agree with various concerns that have been raised about CEA and others in the community caring too much about PR concerns; I think truthfully saying what you believe — carefully and with compassion — is almost always more important than anything else
CEA’s current media policy forbids employees from commenting on controversial issues without permission from leaders (including you). Does the view you express here mean you disagree with this policy? At present it seems that you have had the right to shoot from the hip with your personal opinions but ordinary CEA employees do not.
At the risk of running afoul of the moderation guidelines, this comment reads to me as very obtuse. The sort of equality you are responding to is one that I think almost nobody endorses. The natural reading of “equality” in this piece is the one very typical of, and even to an extent uniquely radical about, EA: Bentham saying “each to count for one and none for more than one”, Sidgwick talking about the point of view of the universe, Singer discussing equal consideration of equal interests. I would chalk this up to an isolated failure to read the statement charitably, but it is incredibly implausible to me that this becoming the top-voted comment can be accounted for by mass reading comprehension problems. If this were not a statement critical of an EA darling, but rather a more mundane statement of EA values that said something about how people count equally regardless of where in space and time they are, or sentient beings count equally regardless of their species, I would be extremely surprised to see a comment like this make it to the top of the post. I get that taking this much scandal in a row hurts, but guys, for the love of god just take the L; this behavior is very uncharming.
I think what Habryka is saying is that while EA does have some notion of equality, the reason it sticks so close to mainstream egalitarianism is that humans don’t differ much. If there were multi-species civilizations like those in Orion’s Arm, for example, where differences in ability span multiple orders of magnitude, then a lot of stratification and non-egalitarianism would happen solely via the value of freedom/empowerment.
And this poses a real moral dilemma for EA, primarily because of impossibility results around fairness/egalitarianism.
or sentient beings count equally regardless of their species
Who supports this? This is an extremely radical proposal, one that I also haven’t seen defended anywhere. Of course sentient beings don’t count equally regardless of their species; that would imply that if fish turn out to be sentient (which they might), their moral weight would completely outweigh all of humanity right now. Maybe you buy that, but it’s definitely extremely far from consensus in EA.
In general, I feel like you just listed six different principles, some of which are much more sensible than others. I still agree that indifference to location and time is a pretty core principle, but I don’t see the relevance of it to the Bostrom discussion at hand, and so I assumed that it was not the one CEA was referring to. This might be a misunderstanding, but I don’t really have any story where stating that principle is relevant to Bostrom’s original statement or apology, given that racism concerns are present in the current day and affect people in the same places as we are. If that is the statement CEA was referring to, then I do withdraw that part of the criticism and replace it with “why are you bringing up a principle that doesn’t seem to have much to do with the situation?”.
And then beyond that, asserting that there is no difference whatsoever in moral consideration between people seems pretty crazy to me, and I haven’t seen it defended. I am not that familiar with Bentham’s exact arguments here, and I don’t think he is particularly frequently cited (or at least I haven’t seen it). I also think I haven’t seen most of the other philosophers cited here except Singer, and I would be happy to have my first object-level discussion now about whether you think a principle of perfectly equal moral consideration should hold. Singer has gone on record thinking that different people do indeed have different moral weight, and this is one of his most controversial beliefs (e.g. his views on disability are a consequence of it and have in the past gotten him cancelled at various universities), so I don’t know what you are referring to here as the principle, though I also feel pretty confused about Singer’s reasoning here.
In general, I think we discuss the differing moral weight of different animals all the time, and I don’t see us following a principle that puts sentient/conscious beings into one large uniform bucket.
Equality is always “equality with respect to what”. In one sense, giving a beggar a hundred dollars and giving a billionaire a hundred dollars is treating them equally, but only with respect to money. With respect to the important, fundamental things (improvement in wellbeing), the two are very unequal. I take it that the natural reading of “equal” is “equal with respect to what matters”, as otherwise it is trivial to point out some way in which any possible treatment of beings that differ in some respect must be unequal (either you treat the two unequally with respect to money, or with respect to welfare, for instance).
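To make the beggar/billionaire point concrete, here is a minimal sketch assuming logarithmic utility of wealth (a standard toy model of diminishing returns, chosen purely for illustration; the wealth figures are hypothetical):

```python
import math

def wellbeing_gain(wealth: float, gift: float) -> float:
    # Log utility: each extra dollar matters less the richer you already are.
    return math.log(wealth + gift) - math.log(wealth)

print(wellbeing_gain(1_000, 100))          # ~0.095: a large gain for the beggar
print(wellbeing_gain(1_000_000_000, 100))  # ~0.0000001: a negligible gain for the billionaire
```

Equal treatment with respect to money ($100 each) comes out wildly unequal with respect to wellbeing, which is the sense of “equal” at issue.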
The most radical view of equality of this sort is that, for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it. This is, for instance, the view of people like Singer, Bentham, and Sidgwick (yes, including non-human animals, which is my view as well). It is also, if not universally then at least to a greater degree than average, one of the cornerstones of the philosophy and culture of Effective Altruism, and it is the reading implied by the post linked in that part of the statement.
Even if you disagree with some of the extreme applications of the principle, race is easy mode for this. Virtually everyone today agrees with equality in this case, so given what a unique cornerstone of EA philosophy this type of equality is in general, it makes sense to reiterate it in cases where it seems that people are being treated with callousness and disrespect based on their race; such cases are an especially worrying sign for us. Again, you might disagree that Bostrom is failing to apply equal respect of this sort, or feel that this use of the word equality is not how you usually think of it, but I find it suspicious that so many people are boosting your comment given how common, even mundane, statements like this are in EA philosophy, and that the statement links directly to a page explaining it on the main EA website.
The most radical view of equality of this sort, is that for any being for whom what matters can to some extent matter to them, one ought to treat them equally with respect to it
This feels to me like it is begging the question, so I am not sure I understand this principle. This framing leaves open the whole question of “what determines how much capacity for things mattering to them someone has?”. Clearly we agree that different animals have different capacities here. Even if a fish somehow managed to communicate “the only thing I want is fish food”, I am going to spend much less money on fulfilling that desire of theirs than I am going to spend on fulfilling an equivalent desire from another human.
Given that you didn’t explain that difference, I don’t currently understand how to apply this principle that you are talking about practically, since its definition seems to have a hole exactly the shape of the question you purported it would answer.
That’s a good question, and is part of what Rethink Priorities are working on in their moral weight project! A hedonistic utilitarian would say that if fulfilment of the fish’s desire brings them greater pleasure (even after correcting for the intensity of pleasure perhaps generally being lower in fish) than the fulfilment of the human’s desire, then satisfying the fish’s desire should be prioritised. The key thing is that one unit of pleasure matters equally, regardless of the species of the being experiencing it.
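A minimal sketch of the accounting this implies, with hypothetical intensity weights (the numbers are placeholders for illustration, not Rethink Priorities’ actual estimates):

```python
def pleasure_from_fulfilment(raw_units: float, intensity_weight: float) -> float:
    # One unit of pleasure counts the same for every being; species differ
    # only in how many units a given desire-fulfilment actually produces.
    return raw_units * intensity_weight

human = pleasure_from_fulfilment(raw_units=10.0, intensity_weight=1.0)   # 10.0 units
fish = pleasure_from_fulfilment(raw_units=10.0, intensity_weight=0.05)   # 0.5 units

# With these assumed weights the human's desire wins, but 21 or more fish
# would tip the scales; the per-unit weighting itself stays impartial.
print(human, fish, 21 * fish > human)  # 10.0 0.5 True
```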
Yeah, I think there are a bunch of different ways to answer this question, and active research on it, but I feel like the answer here does indeed depend on empirical details and there is no central guiding principle that we are confident in that gives us one specific answer.
Like, I think the correct defense is to just be straightforward and say “look, I think different people are basically worth the same, since cognitive variance just isn’t that high”. I just don’t think there is a core principle of EA that would prevent someone from believing that people with a substantially different cognitive makeup deserve less or more moral consideration (though the game theory here often means that you should still trade with them in a way that evens things out, even if that’s not guaranteed).
I personally don’t find hedonic utilitarianism very compelling (and I think this is true for a lot of EA), so I am not super interested in valence-based approaches to answering this question, though I am still glad about the work Rethink is doing, since I think it helps me think about how to answer this question in general.
Agree that not all EAs are utilitarians (though a majority of EAs who answer community surveys do appear to be utilitarian). I was just describing why it is that people who (as you said in many of your comments) think some capacities (like the capacity to suffer) are morally relevant still, despite this, also describe themselves as philosophically committed to some form of impartiality. I think Amber’s comment also covers this nicely.
Just to clarify, I am a utilitarian, approximately, just not a hedonic utilitarian.
Bentham’s view was that the ability to suffer means that we ought to give at least some moral weight to a being (their capacity to suffer determining how much weight they are given). Singer’s view, when he was a preference utilitarian, was that we should equally consider the comparable interests of all sentient beings. Every classical utilitarian will give equal weight to one unit of pleasure or one unit of suffering (taken on their own), regardless of the species, gender or race of the being experiencing the pleasure or suffering. This is a pretty mainstream view within EA. If it means (as MacAskill suggests it might, in his latest book) that the total well-being of fish outweighs the total well-being of humanity, then this is not an objectionable conclusion (and to think otherwise would be speciesist, on this view).
It’s interesting to read this critique of an EVF/CEA press statement through the lens of EVF/CEA’s own fidelity model, which emphasizes the problems and challenges of communicating EA ideas in low-bandwidth channels.
I don’t agree with the specific critique here, but would be curious as to how the decision to publish a near-tweet-level public statement fits into the fidelity model.
In addition to all of this, the statement compounds EA’s already existing trust problem. It was already extremely bad in the aftermath of FTX that people were running to journos to leak them screenshots from private EA governance channels (vide that New Yorker piece). You can’t trust people in an organization or culture who all start briefing the press against each other the minute the chips are down! Now we have CEA publicly knifing a long-term colleague and movement founder figure with this unbelievably short and brutal statement, more or less a complete disowning, when really they needed to say nothing at all, or at least nothing right now.
When your whole movement is founded on the idea of utility maximizing, trust is already impaired because you forever feel that you’re only going to be backed for as long as you’re perceived useful: virtues such as loyalty and friendship are not really important in the mainstream EA ethical framework. It’s already discomfiting enough to feel that EAs might slit your throat in exchange for the lives of a million chickens, but when they appear to metaphorically be quite prepared to slit each other’s throats for much less, it’s even worse!
Sabs—I agree. EAs need to learn much better PR crisis management skills, and apply them carefully, soberly, and expertly.
Putting out very short, reactive, panicked statements that publicly disavow key founders of our movement is not a constructive strategy for defending a movement against hostile outsiders, or promoting trust within the movement, or encouraging ethical self-reflection among movement members.
I’ve seen this error again, and again, and again, in academia—when administrators panic about some public blowback about something someone has allegedly done. We should be better than that.
Agree. At a meta-level, I was disappointed by the seemingly panicked and reactive nature of the statement. The statement is bad, and so, it seems, is the process that produced it.
Hm, I don’t much agree with this because I think the statement is basically consistent with Bostrom’s own apology. (Though it can still be rough to have other people agree with your criticisms of yourself).
Trust does not mean circling the wagons and remaining silent about seriously bad behavior. That kind of “trust” would be toxic to community health because it would privilege the comfort of the leader who made a racist comment over maintaining a safe, healthy community for everyone else.
Being a leader means accepting more scrutiny and criticism of your actions, not getting a pass because you’re a “long-term colleague and movement founder figure.”
Sounds like you feel pretty strongly about this and feel like this was very poorly communicated. What would you have preferred the statement to be instead?
I would also like to add to the other comments that the EA Intro Fellowship has included a book section titled “All Animals Are Equal” for quite some time.
Another statement that “people are equal” from GWWC.