Let me make a case that we should call it Radical Compassion instead of Radical Empathy. This is a very minor point, of course, but then again, people have endlessly debated whether Effective Altruism is a suboptimal label and what a better label would be. People clearly care about what important things are called (and maybe rightly so, from a linguistic-precision and marketing perspective).
You probably know this literature, but there’s a lot of confusion about how empathy should be defined. Sometimes, empathy refers to various perspective-taking processes, like feeling what another feels (let’s call it Empathy 1); I think this is the most common lay definition. Sometimes, it refers to valuing others’ welfare, also called empathic concern or compassion (let’s call it Empathy 2). And sometimes, definitions reference both processes (let’s call it Empathy 3), which doesn’t seem like the most helpful strategy to me.
Holden briefly points to this debate in the post you link to, but it’s not clear to me why he chose the empathy term despite this confusion and disagreement. In one place he seems to endorse Empathy 3, but in another he separates empathy from moral concern, which is inconsistent with Empathy 3.
I think most EAs want people to care about the welfare of others. It doesn’t matter whether people imagine what it feels like to be a chicken pecked to death in a factory farm (that’s going to be near-impossible), or whether they imagine how they themselves would feel in factory-farm conditions (again, very difficult to imagine). We just want them to care about the chicken’s welfare. We therefore want to promote Empathy 2, not 1 or 3. Given the confusion around the empathy term, it seems better to stick with compassion. Lay definitions of compassion also align with the “just care about their welfare” view.
bxjaeger—fair point. It’s worth emphasizing Paul Bloom’s distinction between rational compassion and emotional empathy, and the superiority of the former when thinking about evidence-based policies and interventions.
Agreed—I think Paul Bloom’s distinction makes a lot of sense. Many prominent empathy researchers have pushed back on this, mostly to argue for the Empathy 3 definition that I listed, but I don’t see any benefit in conflating these very different processes under one umbrella term.
Yep—I think Paul Bloom makes an important point in arguing that ‘Empathy 2’ (or ‘rational compassion’) is more consistent with EA-style scope-sensitivity, and less likely to lead to ‘compassion fatigue’, compared to ‘Empathy 1’ (feeling another’s suffering as if it’s one’s own).
I don’t think compassion is the right term descriptively for EA views, and it seems worse than empathy here. Compassion is, by the most common definitions I think, a response to ongoing suffering or misfortune.
Longtermism might not count as compassionate because it’s more preventive than responsive, and the motivation to ensure future happy people come to exist probably isn’t a matter of compassion, because it isn’t aimed at addressing suffering or misfortune. But what Holden is referring to is meant to include both. I think what we’re aiming for is counting all interests and anyone who has interests, as well as the equal consideration of interests.
Of course, acts supported by longtermism, or acts that ensure future happy people come to exist, can still be compassionate, but not for longtermist reasons and probably not because they ensure future happy people exist; rather, because they also address suffering or misfortune. Likewise, longtermists and those focused on ensuring future happy people come to exist can be compassionate people in general, but those motivations themselves (or at least the motivation to ensure future happy people exist) don’t seem compassionate, i.e. they’re just not aimed at ongoing suffering in particular.
You’re right that both empathy and compassion are typically used to describe what motivates people to relieve someone’s suffering. Neither perfectly captures the preventive thinking, or the consideration of interests beyond welfare and suffering, that characterizes longtermist thinking. I think you’re right that compassion doesn’t lead you to want future people to exist, but I do think it leads you to want future people to have positive lives. This point is harder to make for empathy. Compassion often means caring for others because we value their welfare, so it can be easily applied to animals or future people. Empathy means caring for others because we (in some way) feel what it’s like to be them or to be in their position, which seems more difficult when we talk about animals and future people.
I would argue that empathy, as it is typically described, is even more local and immediate, whereas compassion, again as typically described, gets somewhat closer to the idea of putting weight on others’ welfare (in a potentially fully calculated, unemotional way), which I think is closer to EA thinking. This is also in line with how Paul Bloom frames it: empathy is the more emotional route to caring about others, whereas compassion is the more reflective/rational route. So I agree that neither label captures the breadth of EA thinking and motivations, especially not when considering longtermism. I’m not even arguing very strongly for compassion as the label we should go with; my argument is more that empathy seems a particularly bad choice.