Action: Help expand funding for AI Safety by coordinating on NSF response

This post is a crosspost from LessWrong, with the section on other EA causes added. Thanks to Frances Lorenz and Shiri Dori-Hacohen for their feedback on a draft.

tl;dr: Please fill out this short form if you might be willing to take a few small, simple actions in the next ~5 weeks that have the chance to dramatically increase funding for AI Safety through the NSF.

What is this?

The National Science Foundation (NSF) has put out a Request for Information relating to topics they will be funding in 2023 as part of their NSF Convergence Accelerator program. A group of us are working on coordinating responses to maximize chances that they’ll choose AI Safety as one of their topics. This has the potential to add millions of dollars to the available grant pool for AI Safety in 2023.

Shiri Dori-Hacohen originally posted about this on the 80,000 Hours AI Safety email list. Here’s an excerpt from her email which explains the situation well (some emphasis hers, some mine):

To make a long story short(ish), the responses they get to this RfI now (by Feb 28) will influence the call for proposals they put out in this program in 2023.

This RfI is really quite easy to respond to, and it could be a potentially very influential thing to propose AI Safety as a topic. It’s the kind of thing that could have a disproportionate impact on the field by influencing downstream funding, which would then have a ripple effect on additional researchers learning more about AI safety and possibly shifting their work to it. This impact would last over and above any kind of research results funded by this specific call, and I sincerely believe this is one of the highest-impact actions we can take right now.

In my experience, it would be incredibly powerful to mount an orchestrated / coordinated response to this call, i.e. having multiple folks replying with distinct but related proposals. For example, I know that a large group of [redacted] grantees had mounted such a coordinated response a couple of years ago in response to this exact RfI, and that was what led the NSF to pick disinformation as one of the two topics for the 2021 Convergence call, leading to $9M in federal funding (including my own research!) -- and many many additional funding opportunities downstream for the NSF grantees.

Even if there was a relatively small probability of success for this particular call, the outsized impact of success would make the expected value of our actions quite sizable. Furthermore, the program managers reading the responses to these calls have incredible influence on the field, so even if we “fail” in setting the topic for 2023, but nonetheless manage to slightly shift the opinion of the PMs and inclining their perspective towards viewing AI safety as important, that could still have a downstream positive impact on the acceptance of this subfield.

Could this backfire?

Some people in the AI alignment community have raised concerns about how talking to governments about AI existential risk could do more harm than good. For example, in the “Discussion with Eliezer Yudkowsky on AGI interventions” Alignment Forum post on Nov 21, 2021, Eliezer said:

Maybe some of the natsec people can be grownups in the room and explain why “stealing AGI code and running it” is as bad as “full nuclear launch” to their foreign counterparts in a realistic way. Maybe more current AGI groups can be persuaded to go closed; or, if more than one has an AGI, to coordinate with each other and not rush into an arms race. I’m not sure I believe these things can be done in real life, but it seems understandable to me how I’d go about trying—though, please do talk with me a lot more before trying anything like this, because it’s easy for me to see how attempts could backfire.

This is a valid concern in general, but it doesn’t apply to the present NSF RfI. The actions we’re taking here are targeted at expanding grant opportunities for AI Safety through the NSF; they are unlikely to have any direct impact on US policies or regulations. Also, the NSF has a reputation for being quite nuanced and thoughtful in its treatment of research challenges.

What about other EA causes?

This effort is limited in scope to coordinating a response to the NSF’s RfI to promote AI Safety. However, someone could adapt this post and the form below to another prominent effective altruist cause, such as biosecurity, and then mirror the actions we take for AI Safety over the next few weeks for that cause.

If we could get the NSF to prioritize both AI Safety and Biosecurity in their grants for 2023, that would be all the better. If they chose either one as a priority topic, that would be much better than neither.

There are probably other causes besides AI Safety and biosecurity that could be worth doing this for; people are welcome to suggest them in the comments below. That said, it’s hard for me to imagine certain prominent EA causes such as animal welfare being a fit for the NSF, because animal welfare is generally taken to be more of a moral issue than a scientific one (even though certain areas of scientific research could make a large positive impact on animal welfare).

What actions do I take?

If you’re interested in helping out with this, all you have to do right now is fill out this short form so that we can follow up with you:


Then, over the next several weeks before the NSF’s RfI deadline (Feb 28), we’ll ask you to take a few quick, coordinated actions to help us make the best case we can to the NSF on why AI Safety should be prioritized as a funding area for their 2023 Convergence Accelerator.