I think that is a reason that we can’t quickly scale, but not a strong reason that we can’t eventually reach a similar scale to Gates/universities.
I expect that as these fields mature, we’ll break things down into better-defined problems that can be done more effectively by less-aligned people. (I think this is already happening to some extent—e.g. compare the type of AI timelines research needed 5 years ago vs. asking someone to do more research into the parameters of recent OP reports.)
From the outside, GiveWell's work also feels much more regimented and doable by less-aligned people, compared to the early heady days when Holden and Elie were hacking things out from first principles without even knowing about QALYs.
Potentially, but I think the debate largely concerned near-term megaprojects. Cf.:
people able to run big EA projects seem like one of our key bottlenecks right now … I’m especially excited about finding people who could run $100m+ per year ‘megaprojects’
And to the extent that we’re discussing near-term megaprojects, quick scaling matters.
I see; I agree with that.