Interestingly, I have the opposite intuition, that entire subareas of EA/longtermism are kinda plodding along and not doing much because our best people keep going into AI alignment. Some of those areas are plausibly even critical for making the AI story go well.
Still, it’s not clear to me that this allocation is actually a mistake, simply because alignment is so important.
Technical biosecurity and maybe forecasting might be exceptions though.