I don’t think compassion is the right term descriptively for EA views, and it seems worse than empathy here. Compassion is (by the most common definitions, I think) a response to (ongoing) suffering (or misfortune).
Longtermism might not count as compassionate because it’s more preventive than responsive, and the motivation to ensure future happy people come to exist probably isn’t a matter of compassion, because it’s not aimed at addressing suffering (or misfortune). But what Holden is referring to is meant to include both of those. I think what we’re aiming for is counting all interests and anyone who has interests, as well as the equal consideration of interests.
Of course, acts supported by longtermism, or acts that ensure future happy people come to exist, can be compassionate, but maybe not for longtermist reasons, and probably not because they ensure future happy people exist; rather, because they also address suffering (or misfortune). And longtermists and those focused on ensuring future happy people come to exist can still be compassionate in general, but those motivations (or at least the motivation to ensure future happy people come to exist) don’t themselves seem compassionate, i.e. they’re just not aimed at ongoing suffering in particular.
You’re right that both empathy and compassion are typically used to describe what motivates people to relieve someone’s suffering. Neither perfectly captures the preventive thinking or the consideration of interests (beyond welfare and suffering) that characterize longtermist thinking. I think you are right that compassion doesn’t lead you to want future people to exist, but I do think it leads you to want future people to have positive lives. This point is harder to make for empathy. Compassion often means caring for others because we value their welfare, so it can be readily applied to animals or future people. Empathy means caring for others because we (in some way) feel what it’s like to be them or to be in their position, which seems more difficult when we talk about animals and future people.
I would argue that empathy, as it is typically described, is even more local and immediate, whereas compassion, again as it is typically described, gets somewhat closer to the idea of putting weight on others’ welfare (in a potentially fully calculated, unemotional way), which I think is closer to EA thinking. This is also in line with how Paul Bloom frames it: empathy is the more emotional route to caring about others, whereas compassion is the more reflective/rational route. So I agree that neither label captures the breadth of EA thinking and motivations, especially not when considering longtermism. I am not even arguing very strongly for compassion as the label we should go with; my argument is more that empathy seems to be a particularly bad choice.