> At face value, [an EA organization] seems great. But at the meta-level, I still have to ask, if [organization] is a good use of funds, why doesn’t OpenPhil just fund it?
Open Phil doesn’t fund it because they think they can find opportunities that are 10-100x more cost-effective in the coming years.
This is highly implausible. First of all, if it’s true, it implies that instead of funding things, they should just do fundraising and sit around on their piles of cash until they can discover these opportunities.
But it also implies they have (in my opinion, excessively) high confidence that the hinge-of-history and astronomical-waste arguments are wrong, and that transformative AI is farther away than most forecasters believe. Even if AGI isn’t invented until 2060, we have a limited amount of time to alter the probability that it goes well rather than badly for humanity.
When you’re working on global poverty, perhaps you’d want to hold off on donations if your investments are growing by 7% per year while the GDP of the poorest countries is only growing by 2%, because you could have something like 5% more impact by giving 107 bednets next year instead of 100 bednets today.
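Here’s a rough back-of-envelope version of that comparison; the 7% and 2% figures are just the illustrative numbers above, not real estimates:

```python
# Give-now vs. give-later arithmetic for the global poverty example above.
# Assumes (illustratively) that invested donations grow 7%/yr while the
# marginal value of a bednet erodes ~2%/yr as recipient economies grow.
investment_growth = 0.07
impact_discount = 0.02

bednets_now = 100
bednets_next_year = bednets_now * (1 + investment_growth)           # ~107 bednets
extra_impact = (1 + investment_growth) / (1 + impact_discount) - 1  # ~4.9%

print(f"Bednets if you wait a year: {bednets_next_year:.0f}")
print(f"Extra impact from waiting:  {extra_impact:.1%}")
```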
For x-risks this seems totally implausible. What’s the justification for waiting? AGI alignment does not become 10x more tractable over the span of a few years. Private-sector AI R&D has been growing by 27% per year since 2015, and I really don’t think alignment progress has outpaced that. If the time until AGI is limited and short, then we’re actively falling behind. I don’t think Open Phil’s investments or effectiveness are increasing fast enough for this explanation to make sense.
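To make the “falling behind” point concrete, here is a minimal sketch of how that gap compounds; the 27% figure is the R&D growth rate cited above, while the alignment growth rate is purely a hypothetical placeholder:

```python
# Compounding gap between capability R&D growth and alignment-effort growth.
# 27%/yr is the private-sector AI R&D figure cited above; the 10%/yr alignment
# growth rate is a hypothetical placeholder; substitute your own estimate.
capabilities_growth = 0.27
alignment_growth = 0.10

for years in (5, 10, 15):
    cap = (1 + capabilities_growth) ** years
    aln = (1 + alignment_growth) ** years
    print(f"After {years:2d} years: capabilities x{cap:5.1f}, "
          f"alignment x{aln:4.1f}, ratio {cap / aln:.1f}x")
```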
I think the party line is that the well-vetted (and good) places in AI Safety aren’t funding-constrained, and the non-well-vetted places in AI Safety might do more harm than good, so we’re waiting for places to build enough capacity to absorb more funding.
Under that worldview, I feel much more bullish about funding constraints for longtermist work outside of AI Safety, as well as about more meta work that can feed into AI Safety later.
Within AI Safety, if we want to give lots of money quickly, I’d think about:
1. Funding individuals who seem promising and are somewhat funding-constrained.
   - e.g., very smart students in developing countries, or in Europe, who want to go into AI Safety
   - also maybe promising American undergrads from poorer backgrounds
   - The special case here is yourself, if you want to go into AI Safety and want to invest $s in your own career capital.
2. Figuring out which academic labs differentially improve safety over capabilities, and throwing GPUs, research engineers, or teaching-time buyouts at their grad students.
   - When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding-constrained, but top 10-20 are somewhat.
   - We’re mostly bottlenecked on strategic clarity here; different AI Safety people I talk to have pretty different ideas about which research differentially advances safety over capabilities.
3. Possibly just throwing lots of money at “aligned enough” academic places like CHAI, or at individual AI-safety-focused professors.
   - Unlike the above, the focus here is on alignment rather than on a strategic understanding that what people are doing is good; the hope is that apparent alignment plus trusting other EAs is “good enough” to be net positive.
4. Seriously considering buying out AI companies, or other bottlenecks to AI progress.
Other than #1 (which grantmakers are somewhat bottlenecked on due to their lack of local knowledge/networks), none of these seem like “clear wins” in the sense of shovel-ready projects that can absorb lots of money and that we’re pretty confident are good.
> When I talked to an AI safety grad student about this, he said that Top 4 CS programs are not funding-constrained, but top 10-20 are somewhat.

I’ve never been a grad student, but I suspect that CS grad students are constrained in ways that EA donors could fairly easily fix. They might not be grant-funding-constrained, but they’re probably make-enough-to-feel-financially-secure-constrained or grantwriting-time-constrained, and you could convert AI grad students into AI safety grad students by lifting these constraints for them.
This has good content but I am genuinely confused (partly because this article’s subject is complex and this is after several successive replies).
Your point about timelines seems limited to AI risk. I don’t see the connection to the point about CEPI.
Maybe biorisk has “fast timelines” similar to AI risk’s; is that what you mean?
I hesitate to assume that’s your meaning, so I’m writing this comment instead. I really just want to understand this thread better.
Sorry, I didn’t mean to imply that biorisk does or doesn’t have “fast timelines” in the same sense as some AI forecasts. I was responding to the point about “if [EA organization] is a good use of funds, why doesn’t OpenPhil fund it?” being answered with the proposition that OpenPhil is not funding much stuff in the present (disbursing 1% of their assets per year, a really small rate even if you are highly patient) because they think they will find better things to fund in the future. That seems like a wrong explanation.
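For intuition on why 1% per year is such a small rate: at any plausible investment return it never spends down the endowment at all. A quick sketch (the 7% return is an assumed illustrative figure, not Open Phil’s actual return):

```python
# Why a 1%/yr disbursement rate is small even for a very patient funder:
# at any return above ~1%, the endowment grows forever rather than spending down.
# The 7% return is an assumed illustrative figure, not Open Phil's actual return.
assets = 1.0               # normalized endowment
annual_return = 0.07       # assumed investment return
disbursement_rate = 0.01   # share of assets granted each year

for year in range(30):
    grants = assets * disbursement_rate
    assets = (assets - grants) * (1 + annual_return)

print(f"Endowment after 30 years: {assets:.1f}x the starting amount")  # roughly 5-6x
```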