Thanks so much for writing this! I feel like I end up trying to express this idea quite frequently and I’m really glad for the resource on it. I’d also love to see talking about our non-altruistic goals and motivations become more normalised within EA, so yes, thanks 🙂
Personally I identify with the approach you’re expressing very strongly – I find it hard to understand the thought that I might care for my friends only because it ultimately helps me help the world more; I think of them as belonging to different categories. But then I know others who find it very alien that I care a lot about helping the world as much as possible while also being happy to make some decisions for completely non-altruistic reasons. Have others come up against this divide as a problem in EA discussions? I feel like it is sometimes the place where discussions get stuck.
I’d also be interested in knowing, as others have asked, how you (and others) tend to weigh things to spend your time on against each other when they serve different goals. I have various strategies that I try, but they usually boil down to using the non-EA goals as constraints – when there is a choice between a morally effective thing and something else, I usually end up doing the EA thing if I get the answer “no” to questions like “will doing it make me sad?” or “would I be failing in something I owe to someone else?”. I don’t find that very satisfactory – how do others handle it?