“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent. You only get symmetry if the adoption of ‘can now ethically ignore suffering of strangers’ as a moral principle is considered a win for the divide side. That’s the argument that would really shake the foundations of EA.
Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
So actually we have three choices: divide, multiply, or be scope insensitive. In an ideal world populated by good and rational people, they’d probably still care relatively more about their families, but no one would be indifferent to the suffering of the far away. Loving and empathizing with strangers is widely agreed to be a vital and beautiful part of what makes us human, despite our imperfections. The fact that we have this particular cognitive bias of scope insensitivity may be fundamentally human in some sense, but it’s not really part of what makes us human. Nobody calls scope-sensitive people sociopaths. Nobody’s personal idea of utopia elevates this principle of scope insensitivity to the level of ‘love others’.
Likewise, very few would prefer or imagine this idealized world as filled with ‘divide’ people rather than ‘multiply’ people. Because:
The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I’ve managed to pass up news and discussions about Amanda Knox without a second thought.
Most people’s imagined inhabitants of utopia fit the former profile much more closely. So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many. To really attack this foundation you’d have to argue for why these common intuitions about good and bad are wrong, not just that they’re prone to inconsistencies when held by normal humans (as every set of ethical principles is).
So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many.
Suppose I invented a brain modification machine and asked 100 random people to choose between:
M(ultiply): change your emotions so that you care much more in aggregate about humanity than your friends, family, and self
D(ivide): change your emotions so that you care much less about random strangers that you happen to come across than you currently do
S(cope insensitive): don’t change anything.
Would most of them “intuitively” really choose M?
Most people’s imagined inhabitants of utopia fit the former profile much more closely.
From this, it seems that you’re approaching the question differently, analogous to asking someone if they would modify everyone’s brain so that everyone cares much more in aggregate about humanity (thereby establishing this utopia). But this is like the difference between unilaterally playing Cooperate in Prisoners’ Dilemma, versus somehow forcing both players to play Cooperate. Asking EAs or potential EAs to care much more about humanity than they used to, and not conditional on everyone else doing the same, based on your argument, is like asking someone to unilaterally play Cooperate, while using the argument, “Wouldn’t you like to live in a utopia where everyone plays Cooperate?”
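(To make the unilateral-versus-mutual distinction concrete, here is a minimal Prisoners’ Dilemma sketch; the payoff numbers are my own illustrative assumptions, not anything from either post.)

```python
# A toy Prisoners' Dilemma payoff table; numbers are illustrative only.
# Each entry maps (row move, column move) -> (row payoff, column payoff).
PAYOFFS = {
    ("C", "C"): (3, 3),  # everyone plays Cooperate: the "utopia" outcome
    ("C", "D"): (0, 5),  # unilateral Cooperate: the lone cooperator bears the cost
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual Defect
}

def row_payoff(row_move: str, col_move: str) -> int:
    """Payoff to the row player for a given pair of moves."""
    return PAYOFFS[(row_move, col_move)][0]

# The distinction drawn above: mutual cooperation pays 3, but cooperating
# unilaterally while the other player's move stays fixed at Defect pays 0.
print(row_payoff("C", "C"))  # 3
print(row_payoff("C", "D"))  # 0
```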
I think most people would choose S because brain modification is weird and scary. This is an intuition that’s irrelevant to the purpose of the hypothetical but is strong enough to make the whole scenario less helpful. I’m very confident that ~0/100 people would choose D, which is what you’re arguing for! Furthermore, if you added a weaker M that changed your emotions so that you simply care much more about random strangers than you currently do, I think many (if not most) people—especially among EAs—would choose that. Doubly so if idealized versions of themselves, the people they want to be, were the ones making the choice. So again, you are arguing for quite strange intuitions, and I think the brain modification scenario reinforces rather than undermines that claim.
To your second point, we’re lucky that EA cause areas are not prisoner’s dilemmas! Everyday acts of altruism aren’t prisoner’s dilemmas either. By arguing that most people’s imagined inhabitants of utopia ‘shut up and multiply’ rather than divide, I’m just saying that these utopians care *a lot* about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it. Importing the dynamics of an adversarial game into this broad claim introduces a disanalogy.
I’m very confident that ~0/100 people would choose D, which is what you’re arguing for!
In my post I said there’s an apparent symmetry between M and D, so I’m not arguing for choosing D but instead that we are confused and should be uncertain.
By arguing that most people’s imagined inhabitants of utopia ‘shut up and multiply’ rather than divide, I’m just saying that these utopians care a lot about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it.
Ok, I was confused because I wasn’t expecting how you’re using ‘shut up and multiply’. At this point I think you have an argument for caring a lot about strangers which is different from Peter Singer’s. Considering your own argument, I don’t see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners’ dilemmas and tragedy of the commons, or increases overall welfare in other ways, then I’m all for that, but ultimately my own altruism values people’s welfare, not their values, so if they were not very altruistic, but say there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it’s just raw unexplained intuitions, then I’m not sure we should put much stock in them.)
Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I’m not sure we can derive strong conclusions about human values based on these imaginations anyway.
In my post I said there’s an apparent symmetry between M and D, so I’m not arguing for choosing D but instead that we are confused and should be uncertain.
You’re right, I misrepresented your point here. This doesn’t affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.
Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I’m not sure we can derive strong conclusions about human values based on these imaginations anyway.
I stand by my claim that ‘loving non-kin’ is a stable and fundamental human value, that over history almost all humans would include it (at least directionally) in their personal utopias, and that it only grows stronger upon reflection. Of course there’s variation, but when ~all of religion and literature has been saying one thing, you can look past the outliers.
Considering your own argument, I don’t see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners’ dilemmas and tragedy of the commons, or increases overall welfare in other ways, then I’m all for that, but ultimately my own altruism values people’s welfare, not their values, so if they were not very altruistic, but say there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it’s just raw unexplained intuitions, then I’m not sure we should put much stock in them.)
I’m not explaining myself well. What I’m trying to say is that the symmetry between dividing and multiplying is superficial—both are consistent, but one also fulfills a deep human value (which I’m trying to argue for with the utopia example), whereas the other ethically ‘allows’ the circumvention of this value. I’m not saying that this value of loving strangers, or being altruistic in and of itself, is fundamental to the project of doing good—in that we agree.
“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent.
This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can’t especially care about individual people while simultaneously caring about everyone equally. You just can’t. “Logically consistent” means that you don’t claim to do both of these mutually exclusive things at once.
When I say “be consistent and care about individual strangers”, I mean shut up and multiply. There’s no contradiction. It’s caring about individual strangers taken to the extreme where you care about everyone equally. If you care about logical consistency, that works as well as shut up and divide.
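(For readers skimming the thread, here is a small numerical sketch of the two consistent resolutions being contrasted; the figures are made-up illustrations, since neither post gives numbers.)

```python
# Illustrative, made-up numbers for a scope-insensitive pair of felt responses.
n = 1000
felt_concern_for_one = 1.0   # emotional response to one identified stranger
felt_concern_for_n = 1.2     # barely larger response to n strangers in aggregate

# "Shut up and multiply": keep the per-person response as the real value
# and scale the aggregate judgment up to match it.
multiplied_aggregate = n * felt_concern_for_one      # 1000.0

# "Shut up and divide": keep the aggregate response as the real value
# and scale the per-person concern down to match it.
divided_per_person = felt_concern_for_n / n          # 0.0012

# Either move removes the inconsistency between the two felt responses;
# the disagreement in this thread is about which anchor one should keep.
print(multiplied_aggregate, divided_per_person)
```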