I understand that Ord, and MacAskill too, have each given similar explanations, multiple times. But I disagree that the terminology is unbiased: it still leads many readers and listeners to focus on the future of humans if they haven’t encountered these caveats, and perhaps even if they have.
I don’t think the fact that, among organisms, only humans can help other sentient beings justifies almost always using language like “future of humanity”, “future people”, etc. Take the sentence “future people matter morally just as much as people alive today”: whether it should be phrased with “future people” or “future sentient beings” shouldn’t depend on whether humans will be the only beings who can help other sentient beings. It looks like a strategic move to reduce the weirdness of longtermism, or to avoid fighting two philosophical battles at once (which are probably sound reasons, but I also worry that this practice locks in human-centric/speciesist values). So yes, until AGI arrives only humans can help other sentient beings, but the future that matters should still be a “future of sentient beings”.
And I am not convinced that the terminology hasn’t served speciesism/human-centrism in the community. In fact, when some prominent longtermists have tried to evaluate the value of the future, they have focused on how many future humans there could be and what could happen to them. Holden Karnofsky and some others took it further and discussed digital people. MacAskill wrote about the number of nonhuman animals in the past and present in WWOTF, but didn’t discuss how many of them there will be in the future or what might happen to them.
Fair enough.
In this context, I think there are actually two separate ways in which terminology can inadvertently bias our thinking:
1. Talk about “future people” may be interpreted as referring to humans or beings with higher cognitive capacities, rather than to sentient beings or beings whose lives can go better or worse. Some alternative terms we could use to reduce bias here are “future sentients”, “future patients”, “future sentient beings”, and “future moral patients”.
2. Talk about “human potential” or “humanity’s potential” may be interpreted as referring to the value humans can potentially experience, rather than to the value humans can potentially create. I’m not sure there are adequate alternatives here. One could perhaps talk about the “potential of human agency”, though that doesn’t sound very natural.