I think I do see “all people count equally” as a foundational EA belief. This might be partly because I understand “count” differently to you, partly because I have actually-different beliefs (and assumed that these beliefs were “core” to EA, rather than idiosyncratic to me). What I understand by “people count equally” is something like “1 person’s wellbeing is not more important than another’s”.
E.g. a British nationalist might not think that all people count equally, because they think their compatriots’ wellbeing is more important than that of people in other countries. They would take a small improvement in wellbeing for Brits over a large improvement in wellbeing for non-Brits. An EA would be impartial between improvements in wellbeing for British people vs non-British people.
“most QALY frameworks value young people more than older people, many discussions have been had about hypothetical utility monsters, and about how some people might have more moral patienthood due to being able to experience more happiness or more suffering, and of course the moral patienthood of artificial systems immediately makes it clear that different minds likely matter differently in moral calculus”
In all of these situations, I think we can still say people “count” equally. QALY frameworks don’t say that young people’s wellbeing matters more—just that if they die or get sick, they stand to lose more wellbeing than older people, so it might make sense to prioritize them. This seems similar to how I prioritize donating to poor people over rich people—it’s not that rich people’s wellbeing matters less, it’s just that poor people are generally further from optimal wellbeing in the first place. And I think this reasoning can be applied to hypothetical people/beings with greater capacity for suffering. I think greater capacity for happiness is trickier and possibly an object-level disagreement—I wouldn’t be inclined to prioritize Happiness Georg’s happiness above all else, because his happiness outweighs the suffering of many others, but maybe you would bite that bullet.
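(A quick toy illustration of the QALY point above, with entirely made-up numbers: every remaining life-year gets the same weight no matter whose it is, so a younger person simply has more at stake, and the prioritization falls out of that rather than out of weighting anyone’s wellbeing more heavily.)

```python
# Toy QALY sketch with invented numbers: each remaining life-year gets the
# same weight for every person, so nobody "counts" for more per year lived.
LIFE_EXPECTANCY = 80   # hypothetical life expectancy
QUALITY_WEIGHT = 1.0   # identical per-year weight for everyone

def qalys_at_stake(age: int) -> float:
    """QALYs lost if a person of this age dies (toy model)."""
    return max(LIFE_EXPECTANCY - age, 0) * QUALITY_WEIGHT

print(qalys_at_stake(30))  # 50.0 QALYs at stake
print(qalys_at_stake(70))  # 10.0 QALYs at stake
# Same per-year valuation, different totals: prioritizing the young here
# doesn't require valuing their wellbeing more per unit.
```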
Thanks for writing out a reaction very similar to my own. As I wrote in a comment on a different topic, “it seems to me that one of the core values of effective altruism is that of impartiality― giving equal moral weight to people who are distant from me in space and/or time.”
I agree that “all people count equally” is an imprecise way to express that value (and I would probably choose to frame it through the lens of “value” rather than “belief”), but I read this as an imprecise expression of a common value in the movement rather than a deep philosophical commitment to valuing all minds exactly the same.
But there is a huge difference in this case between something being a common belief and a philosophical commitment, and there is also a huge difference between saying that space/time does not matter and that all people count equally.
I agree that most EAs believe that people roughly count equally, but if someone were to argue against that, I would in no way think they are violating any core tenets of the EA community. And that makes the sentence in this PR statement fall flat, since I don’t think we can give any reassurance that empirical details will not change our mind on this point.
And yeah, I think time/space not mattering is a much stronger core belief, but as far as I can tell that doesn’t seem to have anything to do with the concerns this statement is trying to preempt. I don’t think racism and similar stuff is usually motivated by people being far away in time and space (and indeed, my guess is something closer to the opposite is true, where racist individuals are more likely to feel hate towards the immigrants in their country, and more sympathy for people in third world countries).
One of the defining characteristics of EA is rejecting certain specific reasons for counting people unequally; in particular, under EA ideology, helping someone in a distant country is just as good as helping a nearby person by the same amount. Combined with the empirical fact that a dollar has a much larger effect when spent on carefully chosen interventions in poorer countries, this leads to EA emphasizing poverty-reduction programs in poor, mainly African countries, in contrast to non-EA philanthropy, which tends to favor donations local to wherever the donor is.
This is narrower than the broad philosophical commitment Habryka is talking about, though. Taken as a broad philosophical commitment, “all people count equally” would force some strange conclusions when translated into a QALY framework, and when applied to AI, and also would imply that you shouldn’t favor people close to you over people in distant poor countries at all, even if the QALYs-per-dollar were similar. I think most EAs are in a position where they’re willing to pay $X/QALY to extend the lives of distant strangers, $5X/QALY to extend the lives of acquaintances, and $100X/QALY to extend the lives of close friends and family. And I think this is philosophically coherent and consistent with being an effective altruist.
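(A sketch of that tiered picture, using hypothetical numbers only: a baseline of X per QALY for distant strangers, 5X for acquaintances, 100X for close friends and family, and a check of which costs clear which threshold.)

```python
# Toy sketch of tiered willingness-to-pay per QALY; all numbers invented.
X = 100  # hypothetical baseline: dollars per QALY for distant strangers

willingness_per_qaly = {
    "distant stranger": 1 * X,
    "acquaintance": 5 * X,
    "close friend or family": 100 * X,
}

def clears_threshold(relationship: str, cost_per_qaly: float) -> bool:
    """Would this cost-effectiveness clear the threshold for that relationship?"""
    return cost_per_qaly <= willingness_per_qaly[relationship]

# A hypothetical $400/QALY option clears the bar for acquaintances and
# family, but not for distant strangers, under this made-up schedule.
print(clears_threshold("distant stranger", 400))        # False
print(clears_threshold("acquaintance", 400))            # True
print(clears_threshold("close friend or family", 400))  # True
```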
In all of these situations, I think we can still say people “count” equally.
I don’t think this goes through. Let’s just talk about the hypothetical of humanity’s evolutionary ancestors still being around.
Unless you assign equal moral weight to an ape as to a human, this means that you will almost certainly assign lower moral weight to humans or nearby species earlier in our evolutionary tree, primarily on the basis of genetic differences, since there isn’t even any clean line to draw between humans and our evolutionary ancestors.
Similarly, I don’t see how you can be confident that your moral concern in the present day is independent of exactly that genetic variation in the population. That genetic variation is exactly the same variation that, over time, made you care more about humans than about other animals, amplified by many rounds of selection, and as such, it would be very surprising if there were absolutely no difference in moral patienthood among the present human population.
Again, I expect that variance to be quite small, since genetic variance within the human population is much smaller than the variance between different species, and I also expect it not to align very well with classical racist tropes, but the nature of the variance is ultimately the same.
And the last part of the sentence that I quoted also seems not very compatible with this. Digital people might have hugely varying levels of capacity for suffering and happiness and other things we care about, including different EMs. I indeed hope we create beings with much greater capacity for happiness than us, and would consider that one of the moral priorities of our time.
For information, CEA’s OP links to an explanation of impartiality:
Impartial altruism: We believe that all people count equally. Of course it’s reasonable to have special concern for one’s own family, friends and life. But, when trying to do as much good as possible, we aim to give everyone’s interests equal weight, no matter where or when they live. This means focusing on the groups who are most neglected, which usually means focusing on those who don’t have as much power to protect their own interests.
That paragraph does feel kind of confused to me, though it’s hard to be precise in lists of principles like this.
As jimrandomh says above, it is widely accepted in EA that time and location do not matter morally (well, more so location; I think it’s actually pretty common for EAs to think that far-future lives are worth less than present lives, though I don’t agree with this reasoning). But that clearly does not imply that all people count equally, given that there are many possible reasons for differing moral weights.
EMs?
“Emulated Minds” aka “Mind uploads”.
Brain Emulations—basically taking a person and running a simulation of them on a computer, where they could potentially be copied, run faster or slower, etc.
Thanks for writing this up Amber — this is the sense that we intended in our statement and in the intro essay that it refers to (though I didn’t write the intro essay). We have edited the intro essay to make clearer that this is what we mean, and also to make clear that these principles are more like “core hypotheses, but subject to revision” than “set in stone”.