I’m very confident that ~0/100 people would choose D, which is what you’re arguing for!
In my post I said there’s an apparent symmetry between M and D, so I’m not arguing for choosing D but instead that we are confused and should be uncertain.
By arguing that most people’s imagined inhabitants of utopia ‘shut up and multiply’ rather than divide, I’m just saying that these utopians care a lot about strangers, and therefore that caring about strangers is an important human value that regular people hold dear, even though they often fail to live up to it.
Ok, I was confused because I wasn’t expecting that usage of ‘shut up and multiply’. At this point I think your argument for caring a lot about strangers is different from Peter Singer’s. Considering your own argument, I don’t see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners’ dilemmas and tragedies of the commons, or increases overall welfare in other ways, then I’m all for that. But ultimately my own altruism values people’s welfare, not their values. So if they were not very altruistic, but, say, a superintelligent AI in the utopia ensured they had the same quality of life, why should I care either way? Why should or do others care, if they do? (If it’s just raw, unexplained intuitions, then I’m not sure we should put much stock in them.)
Also, historically, people have imagined all kinds of different utopias, based on their religions or ideologies. So I’m not sure we can draw strong conclusions about human values from these imaginings anyway.
In my post I said there’s an apparent symmetry between M and D, so I’m not arguing for choosing D but instead that we are confused and should be uncertain.
You’re right, I misrepresented your point here. This doesn’t affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.
Also, historically, people have imagined all kinds of different utopias, based on their religions or ideologies. So I’m not sure we can draw strong conclusions about human values from these imaginings anyway.
I stand by my claim that ‘loving non-kin’ is a stable and fundamental human value, that over history almost all humans would include it (at least directionally) in their personal utopias, and that it only grows stronger upon reflection. Of course there’s variation, but when ~all of religion and literature has been saying one thing, you can look past the outliers.
Considering your own argument, I don’t see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners’ dilemmas and tragedies of the commons, or increases overall welfare in other ways, then I’m all for that. But ultimately my own altruism values people’s welfare, not their values. So if they were not very altruistic, but, say, a superintelligent AI in the utopia ensured they had the same quality of life, why should I care either way? Why should or do others care, if they do? (If it’s just raw, unexplained intuitions, then I’m not sure we should put much stock in them.)
I’m not explaining myself well. What I’m trying to say is that the symmetry between dividing and multiplying is superficial: both are consistent, but one also fulfills a deep human value (which I’m trying to argue for with the utopia example), whereas the other ethically ‘allows’ that value to be circumvented. I’m not saying that this value of loving strangers, or being altruistic in and of itself, is fundamental to the project of doing good; on that we agree.