I also think it’s worth really looking at why that was the case. This may only be relevant to a narrow slice of problems, but one of my pet theories/hypotheses is that coordination problems (e.g., poor incentive structures in academia) undermine the ability of groups to scale with size for certain research questions. Thus, in some fields it might just be that “super geniuses” were so valuable because “mere geniuses” struggled to coordinate. But if people in EA and AI safety are better at coordinating, this may be less of a problem.
Agreed, these seem like fascinating and useful research directions.