I notice there’s not much there along AI policy/governance/advocacy lines; it’s almost all technical stuff. Those categories seem to fall under your scope (below, from your website). What are the reasons for that kind of work not being funded more? Thanks!
“Projects that directly contribute to reducing existential risks through technical research, policy analysis, advocacy, and/or demonstration projects”
I think the boring answer for why we don’t do as much grantmaking in this area as in technical areas is simply that we don’t receive very many applications. But this isn’t clearly a bad thing: there are many excellent organisations doing great work in AI policy/governance/advocacy, whilst there are only a handful of active organisations on the technical side. I often think that getting a role at an existing org is a better fit for many applicants than doing independent work or starting their own org, and I am grateful that the ecosystem for AI policy/governance/advocacy is developed enough to onboard lots of junior people, rather than them having to apply for grants to do independent work.
We are trying to do more grantmaking in this space, but unfortunately the EA brand makes publicising the grants we do make difficult. Many of our grants would count as “fieldbuilding” for AI policy/governance/advocacy, though we could make this clearer in our grant descriptions. LTFF fund managers are, in general, very excited about work in this area. Even when we can’t fund something directly, we often try to refer it to other funders, so I’d definitely encourage people to apply.
Thanks, that makes a lot of sense, especially the comment about getting a job at a regular org. I’m also heartened to hear that the AI governance space is more developed.
Didn’t realize the “EA brand” might be a negative; that’s sad.
Love the comprehensive summary and transparency
Also, I didn’t find the answer boring at all ;)