re: ‘Shut Up and Divide’, you might be interested in my post on leveling up vs down versions of impartiality, which includes some principled reasons to think the leveling up approach is better justified:
The better you get to know someone, the more you tend to (i) care about them, and (ii) appreciate the reasons to wish them well. Moreover, the reasons to wish them well don’t seem contingent on you or your relationship to them—what you discover is instead that there are intrinsic features of the other person that make them awesome and worth caring about. Those reasons predate your awareness of them. So the best explanation of our initial indifference to strangers is not that there’s truly no (or little) reason to care about them (until, perhaps, we finally get to know them). Rather, the better explanation is simply that we don’t see the reasons (sufficiently clearly), and so can’t be emotionally gripped or moved by them, until we get to know the person better. But the reasons truly were there all along.
It seems empirically false and theoretically unlikely (cf. kin selection) that our emotions work this way. I mean, if it were true, how would you explain things like dads who care more about their own kids whom they’ve never seen than strangers’ kids, (many) married couples falling out of love and caring less about each other over time, or the Cinderella effect?
So I find it very unlikely that we can “level-up” all the way to impartiality this way, but maybe there are other versions of your argument that could work (implying not utilitarianism/impartiality but just that we should care a lot more about humanity in aggregate than many of us currently do). Before going down that route, though, I’d like to better understand what you’re saying. What do you mean by the “intrinsic features” of the other person that make them awesome and worth caring about? What kind of features are you talking about?
One tendency can always be counterbalanced by another in particular cases; I’m not trying to give the full story of “how emotions work”. I’m just talking about the undeniable datum that we do, as a general rule, care more about those we know than we do about total strangers.
(And I should stress that I don’t think we can necessarily ‘level-up’ our emotional responses; they may be biased and limited in all kinds of ways. I’m rather appealing to a reasoned generalization from our normative appreciation of those we know best. Much as Nagel argues that we recognize agent-neutral reasons to relieve our own pain—reasons that ideally ought to speak to anyone, even those who aren’t themselves feeling the pain—so I think we implicitly recognize agent-neutral reasons to care about our loved ones. And so we can generalize to appreciate that like reasons are likely to be found in others’ pains, and others’ loved ones, too.)
I don’t have a strong view on which intrinsic features do the work. Many philosophers (see, e.g., David Velleman in ‘Love as a Moral Emotion’) argue that bare personhood suffices for this role. But if you give a more specific answer to the question of “What makes this person awesome and worth caring about?” (when considering one of your best friends, say), that’s fine too, so long as the answer isn’t explicitly relational (e.g. “because they’re nice to me!”). I’m open to the idea that lots of people might be awesome and worth caring about for extremely varied reasons—for possessing any of the varied traits you regard as virtues, perhaps (e.g. one may be funny, irreverent, determined, altruistic, caring, thought-provoking, brave, or...).
I’m just talking about the undeniable datum that we do, as a general rule, care more about those we know than we do about total strangers.
There are lots of X and Y such that, as a general rule, we care more about someone in X than we do someone in Y. Why focus on X = “those we know” and Y = “total strangers” when this is actually very weak compared to other Xs and Ys, and explains only a tiny fraction of the variation in how much we care about different members of humanity?
(By “very weak” I mean: suppose someone you know were drowning in a pond, and a total stranger were drowning in another pond slightly closer to you. For what fraction of the people you know, including e.g. people you know from work, would you instinctively run to save them over the total stranger? Assume you won’t see either of them again afterwards, so you don’t run to save the person you know just to avoid potential subsequent social awkwardness. Compare this with other X and Y.)
If I think about the broader variation in “how much I care”, it seems it’s almost all relational (e.g., relatives, people who were helpful to me in the past, strangers I happen to come across vs. distant strangers). And if I ask “why?”, the answers I get are things like “my emotions were genetically programmed to work that way”, “because of kin selection”, and “it was a good way to gain friends/allies in the EEA”. Intrinsic/non-relational features (either the features themselves, or how much I know or appreciate the features) just don’t seem to enter that much into the equation.
(Maybe you could argue that upon reflection I’d want to self-modify away all that relational stuff and just value people based on their intrinsic features. Is that what you’d argue, and if so what’s the actual argument? It seems like you sort of hint in this direction in your middle parenthetical paragraph, but I’m not sure.)
for what fraction of the people you know, including e.g. people you know from work, would you instinctively run to save them over the total stranger?
Uh, maybe 90–99%? (With more on the higher end for people I actually know in some meaningful way, as opposed to merely recognizing their face or having chatted once or twice, which is not at all the same as knowing them as a person.) Maybe we’re just psychologically very different! I’m totally baffled by your response here.
Yeah, seems like we’ve surfaced some psychological difference here. Interesting.