In particular, I think many of the epistemically best EAs go into areas like grantmaking, philosophy, and general longtermist research, which leaves a gap of really epistemically strong people focusing full-time on AI. And I think the current epistemic situation in the AI alignment field (both technical and governance) is pretty bad partly because of this.
Interestingly, I have the opposite intuition, that entire subareas of EA/longtermism are kinda plodding along and not doing much because our best people keep going into AI alignment. Some of those areas are plausibly even critical for making the AI story go well.
Still, it’s not clear to me that the allocation is actually mistaken, simply because alignment is so important.
Technical biosecurity and maybe forecasting might be exceptions though.
I mean EAs. I’m most confident about “talent-weighted EAs”. But probably also EAs in general.