Cowen thinks there are limits to EA’s idea that we should be completely impartial in our decisions (that we should weigh all human lives as being equal in value when we make decisions, to the point where we only care about how many lives we can impact and not where in the world those lives are). He cites a thought experiment where aliens come to Earth and want to enslave humankind for their benefit. We don’t calculate whether more net happiness is generated if the aliens get what they want: the vast majority of people would always choose to fight alongside their fellow humans (thus being partial).
Cowen then claims that some degree of partiality is an inescapable part of human psychology, so we ought not to strive to be completely impartial. Not only does this run into Hume's is-ought problem, as he's deriving an ought from (what he believes to be) an empirical fact, but it doesn't get at the core reason why we ought to be partial in some situations. This matters because having a core principle would more clearly define what the limits of our impartiality should be.
For example, I think the notion of personal and collective responsibility is extremely important here for setting clear limits: I am partial to, say, my family over strangers because I have relationships with them that make me accountable to them over strangers. Governments need to be partial to the citizens of their country over the citizens of other countries because they are funded through taxes and voted in by citizens.
Humans should fight on the side of humans in the war against aliens for two reasons. First, every human being is in a relationship with herself, making her responsible for not letting herself be enslaved. Second, one can include the idea of moral responsibility under the umbrella of personal and collective responsibility: even if only some humans are enslaved and there isn't a personal benefit for most people in fighting on their side, slavery is immoral, so we ought to fight for the rights and dignity of those people if there is something we can do about it. If a specific subset of humans voluntarily engaged a whole race of aliens in battle, and the winner didn't enslave the loser, it would actually be wise to pick the side whose victory would lead to the most net happiness, as mere tribalism is not a valid reason to be partial.
I think this is a reasonable response, but Cowen did anticipate the "slavery is immoral" objection, and he is right that it isn't a utilitarian one. That gap could be closed, since a line can be drawn from utilitarianism to this response, but I think Cowen would reply that in this scenario we both wouldn't and shouldn't bother with such fine reasoning, and should just accept our partialities. He makes a similar statement during the Q&A.
I'd contend that this is an example of mixing practical considerations with philosophical ones. Of course we wouldn't stop during an invasion of little green men who are killing and enslaving humans and wonder, "would it be better for them to win?" If you did stop to wonder, there might be many good reasons to say no; but if you're asking whether you'd stop and ask a question, it's no longer a philosophical question, or at least not a thought experiment. Timing is practical, not theoretical.
If it were really all about partialities, and not practicalities, it wouldn't matter which side we were on. If we showed up on another planet and could enslave or exterminate a bunch of little green men, should we stop to think about it before we did? Of course we should. And while you can perhaps concoct a scenario in which it's kill or be killed, there would be little question that we'd need to be certain it wasn't an option to simply turn around and go the other way.