The situation could easily reverse in a short time if awareness of AI risk causes a wave of new research interest, or if 80,000 Hours, the AGI Safety Fundamentals Curriculum, AI Safety Camp, and related programs are able to introduce more people to the field. So just because we have a funding glut now doesn't mean we should assume it will continue through 2023, the period this NSF RfI pertains to.
Could you put some numbers around this, please? E.g., how much do you think we might be able to get the NSF to spend on this? I think we have a big difference in our models here; I can't think of a scenario in which this seems plausible.
For context, it looks like the NSF currently spends around $8.5bn a year, and this particular program was only $12.5m. It seems unlikely to me that we could get them to spend 2% of that budget ($170m) on AI safety in 2023. In contrast, if there were somehow $170m of high-quality grant proposals, I'm pretty confident the existing EA funding system would be able to fund them all.
This might make sense if all the existing big donors suddenly decided that AI safety was not very important, leaving us very short on money. But if that happens, it's probably because they have become aware of compelling new arguments against funding AI safety, in which case the decision is probably reasonable!