Hello Alex. Thanks for writing this up. I agree we should try, hard as that might be, to be honest with ourselves about our underlying motivations (which are often non-obvious anyway). I often worry about this in my own case.
That being said, I want to push back slightly on the case you’ve picked. To paraphrase, your example was “I think long-termism is actually true, but I’m going to have to sacrifice a lot to move from development economics to AI policy”. Yet, if you hang around in the EA world long enough, the social pressure and incentives to conform to long-termism seem extremely strong: the EA leadership endorse it, there seem to be much greater status and job prospects for long-termists, and if you work on near-term causes people keep challenging you for having “weird beliefs” and treating you as an oddity (sadly, I speak from experience). As such, it’s not at all obvious to me that your rationalisation example works here: there is a short-term cost to switching your career path but, over the longer term, switching to long-termism plausibly benefits one’s own welfare (assuming one hangs around with EAs a lot). Hence, this isn’t a clear case of “I think X is true, but it’s really going to cost me, personally, to believe X”.
Yes! Totally agree. I think I mentioned very briefly that one should also be wary of social dynamics pushing toward EA beliefs, but I definitely didn’t address it enough. Although I think the end result was positive and that my beliefs are true (with some uncertainty, of course), I would guess that my update toward long-termism was due in large part to lots of exposure to the EA community and the social pressure that brings.
I basically bought some virtue-signaling value in the EA domain at the cost of signaling value in broader society. Given that I hang out with a lot of EAs and plan to do so more in the future, I’d guess that if I were to rationally evaluate this decision it would look net positive in favor of switching toward long-termism (you would also gain within the EA community by making a similar switch, though with some short-term “I told you so” negative effects).
So yes, I think it was largely because of closer social ties to the EA community that this switch finally became worthwhile, and perhaps that calculation was going on at a subconscious level. It’s probably no coincidence that I finally made the full switch during an EA retreat, where the broader-society costs of switching beliefs were less salient and the EA benefits much more salient. To have the perfect decision-making situation, I guess it would be nice to have equally good opportunities in communities representing every philosophical belief, but for now that seems a bit unlikely. I suppose it’s another argument for cultivating diversity within EA.
This brings up a whole other rabbit hole in terms of thinking about how we want to appeal to people who have some interest in EA but aren’t yet committed to the ideas. I think the social aspect is probably larger than many might think. Of course, if we emphasized it, we’d be limiting people’s ability to choose to join EA in a rational way. But then, what is ‘choice’ really, given the social construction of our personalities and desires...