Thank you so much for this!
I’m really curious about the “nothing I haven’t heard before” in relation to the Social Capital Concern. Have people raised this before? If so, what’s being done about it? As I said, I think it’s the most serious of the four I mentioned, so if it’s empirically supported, what’s the action plan against it?
Fair question! I should’ve been clearer that the implicit premise of the concern is that there has already been an overcorrection toward longtermism.
The value-add of EA is distributing utility efficiently (not longtermism per se). If there’s been an overcorrection, then there’s an inefficiency and a recalibration is needed. So the concern is: how hard will it be to snap back to the right calibration? The longer longtermism dominates, and the more completely it does, the more the movement’s muscle memory will work against that snap-back.
If the EA movement has perfectly calibrated the amount of longtermism it needs (or if there’s currently not enough longtermism), then this concern can be set aside.