Thank you for the post!

I have thought a lot about one particularly biased piece of terminology in longtermism: “humanity” / “future people”. Why is it not “all sentient beings”? (I am guessing one big reason is strategic rather than conceptual.)
It is not a bias, at least not as the term is used by leading longtermists. Toby Ord explains it clearly in The Precipice (pp. 38–39):
my focus on humanity in the definitions is not supposed to exclude considerations of the value of the environment, other animals, successors to Homo sapiens, or creatures elsewhere in the cosmos. It is not that I think only humans count. Instead, it is that humans are the only beings we know of that are responsive to moral reasons and moral argument—the beings who can examine the world and decide to do what is best. If we fail, that upward force, that capacity to push toward what is best or what is just, will vanish from the world.
Our potential is a matter of what humanity can achieve through the combined actions of each and every human. The value of our actions will stem in part from what we do to and for humans, but it will depend on the effects of our actions on non-humans too. If we somehow give rise to new kinds of moral agents in the future, the term “humanity” in my definition should be taken to include them.
I understand that Ord, and MacAskill too, have each given similar explanations on multiple occasions. But I disagree that the terminology is not biased: it still leads many readers/listeners to focus on the future of humans if they haven’t encountered these caveats, and perhaps even if they have.
I don’t think the fact that, among organisms, only humans can help other sentient beings justifies almost always using language like “future of humanity”, “future people”, etc. Take, for example, the sentence “future people matter morally just as much as people alive today”. Whether it should be phrased with “future people” or “future sentient beings” shouldn’t have anything to do with whether humans will be the only beings who can help other sentient beings. It looks like a strategic move to reduce the weirdness of longtermism, or to avoid fighting two philosophical battles at once (which are probably sound reasons, though I also worry that this practice locks in human-centric/speciesist values). So yes, until AGI arrives only humans can help other sentient beings, but the future that matters should still be a “future of sentient beings”.
And I am not convinced that the terminology hasn’t served speciesism/human-centrism in the community. In fact, when some prominent longtermists have tried to evaluate the value of the future, they have focused on how many future humans there could be and what could happen to them. Holden Karnofsky and others took it further and discussed digital people. MacAskill wrote about the numbers of nonhuman animals in the past and present in WWOTF, but didn’t discuss how many there will be in the future or what might happen to them.

Fair enough.
In this context, I think there are actually two separate ways in which terminology can inadvertently bias our thinking:
1. Talk about “future people” may be interpreted as referring to humans or beings with higher cognitive capacities, rather than to sentient beings or beings whose lives can go better or worse. Some alternative terms we could use to reduce bias here are “future sentients”, “future patients”, “future sentient beings”, and “future moral patients”.
2. Talk about “human potential” or “humanity’s potential” may be interpreted as referring to the value humans can potentially experience, rather than to the value humans can potentially create. I’m not sure there are adequate alternatives here. One could perhaps talk about the “potential of human agency”, though that doesn’t sound very natural.
There’s a footnote in the next post to the effect that ‘people’ shouldn’t be taken too literally. I think that’s the short answer in many cases: it’s just easier to say/write ‘people’. There’s also a tradition in philosophy of applying ‘personhood’ more broadly than just to Homo sapiens. It’s maybe a confusing tradition, since much of the time people also use the word in its more commonly understood sense, so the two can get muddled.
For what it’s worth, my impression is that few philosophical longtermists would exclude nonhuman animals from moral consideration, though they might disagree about how much we should value the welfare of particularly alien-seeming artificial intelligences.
I think “easier to say/write” is not a good enough reason (certainly a much weaker one than the concern about fighting two philosophical battles, or about scaring people away with weirdness) to always say/write “people”/“humanity”.
My understanding is that when it was proposed that humans/people/humanity replace men/man/mankind as the generic terms, there was some pushback. I haven’t checked the details of that pushback, but I can imagine some arguing that man/mankind were simply easier to say/write, being shorter and more commonly used at the time. And I am fairly sure that “mankind” not being gender-neutral is what eventually led feminists, writers, and even etymologists to support using “humanity” instead.
You mentioned that “the two [meanings] can get muddled”. For me, that’s a reason to use “sentient beings” instead of “people”. It was in fact the reason some etymologists gave for supporting “humanity” in place of “mankind”: by their time, the word “man/men” had come to mean both “humans” and “male humans”, which could, if not likely did, suggest that anything concerning the whole of humanity has nothing to do with women.
And just as we need to ask whether “mankind”, given the now most common meaning of “man”, fails to name women among the stakeholders, we need to ask whether “people” is a good umbrella term for all sentient beings. It seems to me that it clearly is not.
I am glad you mentioned the word “person”. The same problems still exist insofar as people think “person” can only apply to humans (which is arguably most people), but they are less severe. For instance, some animal advocates are campaigning for certain nonhuman animals to be granted legal personhood (and some environmentalists have sought legal personhood for natural entities, occasionally with success). My current take is that “person” is better, but still not ideal, since most people today can only think of humans when they see or hear “person”.
I agree that “few philosophical longtermists would exclude nonhuman animals from moral consideration”, but I took “few” literally, because I do think there is at least one who would. Eliezer Yudkowsky (though some might question how much of a philosopher/longtermist he is) holds the view that pigs cannot feel pain, and by choosing the pig as his example he is effectively saying that no nonhuman animals can. It also seems to me that some “practical longtermists” I have come across omit or heavily discount nonhuman animals in their longtermist picture. For instance, Holden Karnofsky said in a 2017 article on radical empathy that his “own reflections and reasoning about philosophy of mind have, so far, seemed to indicate against the idea that e.g. chickens merit moral concern. And my intuitions value humans astronomically more.” (He accepts that he could be wrong and that there are smart people who think he is wrong about this, so he is willing to have Open Philanthropy’s “neartermist” side help nonhuman animals.) Your claim also still seems mostly right to me, because most longtermists are EAs or LessWrongers or both. But I expect some philosophers from outside EA/LessWrong to become longtermists in the future (presumably this is what advocates of longtermism want, even those who only care about humans), and I also expect some of them not to care about animals.
Also, excluding nonhumans from longtermist philosophy is different from excluding them from the longtermist project. The fact that longtermist funders have not yet supported a single project working on animal welfare under a longtermist worldview makes the philosophical inclusion cold comfort, if not outright depressing. (Not even one, even though funding one could easily be justified by worldview/intervention diversification. And I can assure you that this is not for lack of proposals.)
P.S. In my own writing I sometimes have to say “animals” instead of “nonhuman animals” so as not to freak people out or make them think I am an extremist. But this clearly suffers from the same problem I am complaining about.