Thanks for the question. At the time we were generating the initial list of ideas, it wasn’t clear that AI safety was funding-constrained rather than talent-constrained (or even idea-constrained). As you’ve pointed out, it seems more plausible now that finding additional funding sources could be valuable for a couple of reasons:
- It helps respond to the higher funding bar you've mentioned.
- It takes advantage of new entrants to AI-safety-related philanthropy, notably the mainstream foundations that have now become interested in the space.
I don’t have a strong view on whether additional funding should be used to start a new fund or whether it would be more efficient to direct it towards existing grantmakers. I’m pretty excited about recently launched grantmakers like Manifund, which are trying out new ways of being more responsive to potential grantees. I also don’t have a strong view on whether ideas around increasing funding for AI safety are more valuable than those listed above, but I’d be pretty excited about the right person doing something around educating mainstream donors about AI safety opportunities.