I think this is an important article, and it highlights a critical, nontrivial distinction between altruistic and for-profit work. However, I also think that viewing everything from a purely global perspective misses an important part of what incentivizes individuals to do good work.
Whenever people tell me that after talking to me they’ve donated to an effective charity, taken the pledge, traded their vote, written an article about EA, etc., I feel a large spike of joy that my actions have quite possibly produced counterfactual good in the world. (Less joy than if I had earned and donated the money myself, but still very significant joy.) I do not feel nearly as much of this joy when learning that, e.g., Will has done the same thing, or that Good Ventures’ investments performed slightly better than the market would suggest.
Now, no doubt part of this problem is just a classic case of scope insensitivity. You could say that if I should feel happy about raising $X, I should feel even happier that Good Ventures or CEA raised $100X or $1000X, and that it’s wrong of me not to update in that direction.
But I think there’s a different issue here too. My second-order preference for myself is that my own emotions should not serve merely as those of an objective observer, looking in from outside to get an impartial view of the world. I am also an agent acting upon the world, and it’s very relevant to me that my own emotions accurately and acutely inspire me to do the most important and impactful things.
Thus, it makes sense for me to feel, on a visceral, System 1 level, directly moved by my own opportunities to have a personal impact, in a way that I am not moved by the work of Good Ventures or CEA, which is less directly relevant to my own space of actions.
I am open to the idea that seeing myself as a coherent, individual agent is silly (especially in light of the late Derek Parfit’s great works). But most people see themselves as agents, so I feel the presumption should be that this is a useful approximation. Likewise, perhaps other people (including yourself, Peter) are not significantly inspired to act by their past and predicted future emotional states, and can do good work regardless of whether they’re happy or sad. In that case, I commend you, and I am working towards being more stoic in general. But ultimately I don’t know anybody else’s motivations as well as my own, and I know that my own happiness is a very relevant reinforcement and feedback mechanism for my own ability to be altruistically impactful.
TL;DR: While I agree with you that a) seeing the general progress of the EA movement is a good motivational factor and b) interpersonal comparisons are very suboptimal for inspiration, I disagree with the larger thrust of this argument, which seems to imply that I should be inspired more by the efforts of Team EA than by my own counterfactual impact. This comes close to implying that the primary use of System 1 emotion is to accurately reflect the world, whereas I would argue that it’s more important for my own emotions to be used to train my behavior.