I'm in a relatively similar position to Neel. I think technical AI safety grantmakers typically know way more than me about what is promising to fund. There is a bunch of non-technical info which is very informative for judging whether a grant is good (what do current marginal grants look like, what are the downside risks, is there private info on the situation which makes things seem sketchier, etc.), and grantmakers are generally in a better position than I am to evaluate this stuff.
The limiting factor [in technical AI safety funding] is having enough technical grantmakers, not having enough organizational diversity among grantmakers (at least at current margins).
If OpenPhil felt more saturated on technical AI grantmakers, then I would think that starting new orgs pursuing different funding strategies for technical AI safety could look considerably better than just having more people do grantmaking at OpenPhil.
That said, note that I tend to agree to a reasonable extent with OpenPhil's technical takes on AI safety. If I heavily disagreed, I might think starting new orgs looks pretty good.