I think the party line is that the well-vetted (and good) places in AI Safety aren’t funding-constrained, and the non-well-vetted places in AI Safety might do more harm than good, so we’re waiting for places to build enough capacity to absorb more funding.
Under that worldview, I feel much more bullish about funding constraints binding for longtermist work outside of AI Safety, as well as for meta work that can feed into AI Safety later.
Within AI Safety, if we want to give lots of money quickly, I’d think about:
1. Funding individuals who seem promising and are somewhat funding-constrained.
   - e.g., very smart students in developing countries, or in Europe, who want to go into AI Safety.
   - Also maybe promising American undergrads from poorer backgrounds.
   - The special case here is yourself, if you want to go into AI Safety and want to invest $s in your own career capital.
2. Figure out which academic labs differentially improve safety over capabilities, and throw GPUs, research engineers, or teaching-time buyouts at their grad students.
   - When I talked to an AI safety grad student about this, he said that top-4 CS programs are not funding-constrained, but top 10-20 programs are somewhat.
   - We’re mostly bottlenecked on strategic clarity here: different AI Safety people I talk to have pretty different ideas about which research differentially advances safety over capabilities.
3. Possibly just throw lots of money at “aligned enough” academic places like CHAI, or at individual AI-safety-focused professors.
   - Unlike the above, the focus here is more on alignment than on a strategic understanding that what people are doing is good; we’d just be hoping that apparent alignment plus trusting other EAs is “good enough” to be net positive.
4. Seriously consider buying out AI companies, or other bottlenecks to AI progress.
Other than #1 (which grantmakers are somewhat bottlenecked on due to their lack of local knowledge/networks), none of these things seem like “clear wins” in the sense of shovel-ready projects that can absorb lots of money and that we’re pretty confident are good.
> When I talked to an AI safety grad student about this, he said that top-4 CS programs are not funding-constrained, but top 10-20 programs are somewhat.

I’ve never been a grad student, but I suspect that CS grad students are constrained in ways that EA donors could fairly easily fix. They might not be grant-funding-constrained, but they’re probably make-enough-to-feel-financially-secure-constrained or grantwriting-time-constrained, and you could convert AI grad students into AI safety grad students by lifting these constraints for them.