Re: Shut Up and Divide. I haven’t read the other comments here but…
For me, effective-altruism-like values are mostly second-order, in the sense that my revealed behavior shows that much of the time I don’t want to help strangers, animals, future people, etc. But I think I “want to want to” help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I act on those second-order desires, doing something to help strangers at personal cost to myself (though I do this less than e.g. Will MacAskill). I don’t really detect in myself a symmetrical second-order desire to NOT want to help strangers. So that’s one thing “Shut up and multiply” has over “shut up and divide,” at least for me.
That said, I realize now that I’m often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor’s occasional desire to help strangers and suggest they generalize it, but I don’t symmetrically appeal to their clearer and more common lack of interest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that’s a more complicated conversation.
(cross-posted)