This is a cold take that’s probably been said before, but I think it bears repeating occasionally, if only for the reminder:
The longtermist viewpoint has gotten a lot of criticism for prioritizing “vast hypothetical future populations” over the needs of “real people” alive today. The mistake, so the critique goes, is the result of replacing ethics with math, or utilitarianism, or something cold and rigid like that. And so it’s flawed because it lacks the love or duty or “ethics of care” or concern for justice that leads people to alternatives like mutual aid and political activism.
My go-to reaction to this critique has become something like “well, you don’t need to prioritize vast abstract future generations to care about pandemics or nuclear war; those are very real things that could, with non-trivial probability, face us in our lifetimes.” I think this response has taken hold in general among people who talk about X-risk. This probably makes sense for pragmatic reasons. It’s a very good rebuttal to the “cold and heartless utilitarianism / Pascal’s mugging” critique.
But I think it unfortunately neglects the critical point that longtermism, when taken really seriously — at least the sort of longtermism that MacAskill writes about in WWOTF, or that Joe Carlsmith writes about in his essays — is full of care and love and duty. Reading the thought experiment that opens the book, in which you live every human life in sequential order, reminded me of this. I wish more people responded to the “longtermism is cold and heartless” critique by making the case that no, longtermism taken at face value is worth defending because it’s the polar opposite of heartless. Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
(I mean, you can also find longtermism worthy because of something something math and cold utilitarianism. That’s not out of the question. I just don’t think it’s the only way to reach that conclusion.)
I want to slightly push back against this post in two ways:
I do not think longtermism is any sort of higher form of care or empathy. Many longtermist EAs are motivated by empathy, but they are also driven by a desire for philosophical consistency, beneficentrism, and scope-sensitivity that is uncommon among the general public. Many are also not motivated by empathy—I think empathy plays some role for me but is not the primary motivator? Cold utilitarianism is more important but not the primary motivator either [1]. I feel much more caring when I cook dinner for my friends than when I do CS research, and it is only because I internalize scope-sensitivity more than 99% of people do that I can turn empathy into any motivation whatsoever to work on longtermist projects. I think that for most longtermists, it is not more empathy, nor a better form of empathy, but the interaction of many normal (often non-empathy) altruistic motivators and other personality traits that makes them longtermists.
Longtermists trade off other common values against helping vast future populations in ways that most people disagree with, and without idiosyncratic EA values there is no reason that a caring person should make the same tradeoffs. I think the EA value of “doing a lot more good matters a lot more” is really important, but it is still trading off against other values, for example:
Helping people closer to you / in your community: many people think this has inherent value.
Direct involvement (rather than pure beneficentrism): most people think there is inherent value in being personally involved in helping others. Habitat for Humanity is extremely popular among caring and empathic people, and most of them would not think it better to make a larger overall difference by, e.g., subsidizing eyeglasses in Bangladesh.
Justice: most people think it is more important to help one human trafficking victim than one tuberculosis victim or one victim of omnicidal AI, even if the welfare created is the same, because they place inherent value on justice. Both longtermists and GiveWell would consider these options similarly good, modulo secondary consequences and decision theory.
Discount rate, risk aversion, etc.: there is no reason that a 10% chance of saving 100 lives 6,000 years from now is better than a 40% chance of saving 5 lives tomorrow, unless you already accept zero-discount expected value as the metric to optimize (the sketch below shows how sensitive the comparison is to the discount rate). The reason to believe in zero-discount expected value is a thought experiment involving the veil of ignorance, or maybe the VNM theorem. It is not caring that is doing the work here, since both can be very caring acts; it is your belief in the thought experiment that connects your caring to the expected value.
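To make that concrete, here is a minimal sketch of the arithmetic in that last bullet. The 10%/100-lives and 40%/5-lives gambles come from the example above; the non-zero discount rates are hypothetical values chosen purely for illustration.

```python
# Toy comparison of the two gambles from the bullet above under different
# annual discount rates. The non-zero rates are illustrative assumptions.

def discounted_expected_lives(prob, lives, years_away, annual_discount):
    """Probability-weighted lives saved, discounted back to the present."""
    return prob * lives / ((1 + annual_discount) ** years_away)

for rate in (0.0, 0.001, 0.01):  # zero, 0.1%, and 1% per year
    far_future = discounted_expected_lives(0.10, 100, 6000, rate)  # 10% chance, 100 lives, 6,000 years out
    near_term = discounted_expected_lives(0.40, 5, 0, rate)        # 40% chance, 5 lives, tomorrow
    print(f"discount {rate:.1%}: far future ≈ {far_future:.3g}, near term ≈ {near_term:.3g}")

# With a zero discount rate the far-future gamble dominates (10 vs. 2 expected lives);
# with even a 0.1% annual discount it drops to about 0.02 and loses decisively.
```

The point is not that any particular discount rate is correct, only that the ranking flips entirely depending on a prior commitment to zero discounting.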
In conclusion, I think that while care and empathy can be important motivators for longtermists, and it is valid for us to think of longtermist actions as the ultimate act of care, we are motivated by a conjunction of empathy/care and other attributes, and it is the other attributes that are by far the more important. For someone who has empathy/care and values beneficentrism and scope-sensitivity, preventing an extinction-level pandemic is an important act of care; for someone like me, or for a utilitarian, pandemic prevention is also an important act. But for someone who values justice more, applying more care does not make them prioritize pandemic prevention over helping a sex trafficking victim, and in the larger altruistically inclined population, I think a greater focus on care and empathy conflicts with longtermist values more than it contributes to them.
[1] More important for me are: feeling a moral obligation to make others’ lives better rather than worse, wanting to do my best when it matters, and wanting future glory and social status for producing so much utility.
Thanks for this reply — it does resonate with me. It actually got me thinking back to Paul Bloom’s book Against Empathy, how when I read it I thought something like “oh yeah, empathy really isn’t the best guide to acting morally,” and whether that view contradicts what I was expressing in my quick take above.
I think I probably should have framed the post more as “longtermism need not be totally cold and utilitarian”: there is an emotional, caring psychological relationship we can have with hypothetical future people, because we can imaginatively put ourselves in their shoes. It might even incorporate elements of justice or fairness if we consider them a disenfranchised group without representation in today’s decision-making, one we are potentially throwing under the bus for our own benefit, or something like that. So justice and empathy can easily be folded into longtermist thinking. This sounds like what you are saying here, except maybe I do want to stand by the claim that EA values aren’t necessarily trading off against justice, depending on how you define it.
Caring about the world we leave for the real people, with emotions and needs and experiences as real as our own, who very well may inherit our world but who we’ll never meet, is an extraordinary act of empathy and compassion — one that’s way harder to access than the empathy and warmth we might feel for our neighbors by default. It’s the ultimate act of care. And it’s definitely concerned with justice.
If we go extinct, they won’t exist, so won’t be real people or have any valid moral claims. I also consider compassion, by definition, to be concerned with suffering, harms or losses. People who don’t come to exist don’t experience suffering or harm and have lost nothing. They also don’t experience injustice.
Longtermists tend to seem focused on ensuring future moral patients exist, e.g. through extinction risk reduction. But, as above, ensuring moral patients come to exist is not a matter of compassion or justice for those moral patients. Still, they may help (or harm!) other moral patients, including other humans who would exist anyway, animals, aliens, or artificial sentience.
On the other hand, longtermism is still compatible with a primary concern for compassion or justice, including through asymmetric person-affecting views and wide person-affecting views (e.g. Thomas, 2019; these would probably focus on s-risks and quality improvements), negative utilitarianism (focus on s-risks), and perhaps even narrow person-affecting views. However, utilitarian versions of most of these views still seem prone, at least in principle, to endorsing killing everyone in order to replace us and our descendants with better-off individuals, even if each of us and our descendants would have had an apparently good life and would object to being replaced. I think some (symmetric and perhaps asymmetric) narrow person-affecting views can avoid this, and maybe these are the ones that fit best with compassion and justice. See my post here.
That being said, empathy could mean more than just compassion or justice, and could support bringing happy people into existence for their own sake (e.g. Carlsmith, 2021). I disagree that we should create people for their own sake, though, and my intuitions are person-affecting.
Other issues people have with longtermism are fanaticism and ambiguity: the probability that any individual averts an existential catastrophe is, at best, usually quite low (e.g. 1 in a million), and the numbers involved are also pretty speculative.
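To illustrate why those two worries go together, here is a toy expected-value calculation. Only the 1-in-a-million figure comes from the comment above; the number of future lives and the near-term comparison are hypothetical values chosen purely for illustration.

```python
# Toy numbers illustrating the fanaticism/ambiguity worry: a tiny probability of
# averting extinction, multiplied by an assumed astronomical number of future lives,
# still swamps a certain near-term benefit in expected value.

p_avert = 1e-6           # 1-in-a-million chance one person's work averts the catastrophe (from the comment)
future_lives = 1e15      # assumed lives at stake in the long-run future; purely illustrative
near_term_lives = 1_000  # lives saved with certainty by a comparison near-term option; also illustrative

ev_longtermist = p_avert * future_lives  # 1e9 expected lives
print(f"expected lives, longtermist bet: {ev_longtermist:.0e}")
print(f"lives saved, near-term option:   {near_term_lives:.0e}")

# The comparison is driven almost entirely by the speculative inputs (p_avert and
# future_lives), which is what the fanaticism and ambiguity objections point at.
```

Whether that kind of comparison should be decision-relevant is exactly what is at issue; the sketch just shows why the objection has bite.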
Yeah, I meant to convey this in my post, but framed it a bit differently — that they are real people with valid moral claims who may exist. I suppose framing it this way just moves the hypothetical condition elsewhere, to emphasize that, if they do exist, they would be real people with real moral claims, and that matters. Maybe that’s confusing though.
BTW, my personal views lean towards a suffering-focused ethics that isn’t seeking to create happy people for their own sake. But I still think that, in coming to that view, I’m concerned with the experience of those hypothetical people in the fuzzy, caring way that utilitarians are charged with disregarding. That’s my main point here. But maybe I just get off the crazy train at my unique stop. I wouldn’t consider tiling the universe with hedonium to be the ultimate act of care/justice, but I suppose someone could feel that way, and thereby make an argument along the same lines.
Agreed there are other issues with longtermism — just wanted to respond to the “it’s not about care or empathy” critique.