Begging, Pleading AI Orgs to Comment on NIST AI Risk Management Framework

The National Institute of Standards and Technology (NIST) is seeking public comment on its Draft AI Risk Management Framework until April 29, 2022. NIST will produce a second draft for comment, as well as host a third workshop, before publishing AI RMF 1.0 in January 2023. Please send comments on this initial draft to AIframework@nist.gov by the deadline.

I would like to see places like ARC, OpenAI, Redwood Research, MIRI, Centre for the Governance of AI, CHAI, Credo AI, OpenPhil[1], FHI, Aligned AI, and any other orgs make the effort to comment. Without going deeply into the reasons here on a public forum, I think influencing the development of NIST's AI Risk Management Framework could be high impact. The framework is intended for voluntary use in addressing risks in the design, development, use, and evaluation of AI products, services, and systems. NIST standards are often written into government procurement contracts, so they shape what the federal government does or does not purchase through acquisitions. This in turn shapes how industry develops its products, services, and systems to meet government standards so they can get those sweet, sweet federal dollas. For example, the IRS issued a Request for Proposals (RFP) soliciting a contract with a company that could meet NIST SP 800-63-3 requirements for facial recognition technology. NIST is also influential through commercial-off-the-shelf (COTS) items: companies benefit from making products, services, and systems that can easily be adapted aftermarket to meet the needs of the U.S. government, allowing them to reach both commercial and governmental markets.

I have been somewhat disheartened by the lack of AI alignment or safety orgs making comments on early-stage things, where it would be very easy to move the Overton window and/or (in the best-case scenario) put some safeguards in place against worst-case scenarios for things we clearly know could be bad, even if we don't know how to solve alignment problems just yet. As it moves forward (it will go through several iterations and updates), the NIST Framework will be a great place to add AI safety standards that we KNOW would at least help us avoid catastrophe.

This is also a good time to beg and plead for more EAs to go into NIST for direct work. If you are thinking this might be a good fit for you and want to try it out, please consider joining Open Phil’s Tech Policy Fellowship the next time applications open (probably late summer?).

I am heartened that at least some orgs that at least sometimes (if not always) contemplate AI alignment and safety have recently provided public comment on AI stuff the U.S. gov is doing. E.g., Anthropic, CSET, Google (not sure if it was DeepMind folks), and Stanford HAI (kind of) commented on the recent NAIRR Task Force Request for Information (RFI). Future of Life Institute has also been quite good at making comments of this type and has partnered with CHAI in doing so. But there is room for improvement, and sometimes these comments can be quite impactful (especially for formal administrative rulemaking, but we will leave that aside). In the NAIRR Task Force example above, there were only 84 responses; five additional EA orgs saying the same thing in a unified voice could be marginally impactful in influencing the Task Force.

NIST’s work on the Framework is consistent with its broader AI efforts, recommendations by the National Security Commission on Artificial Intelligence, and the Plan for Federal Engagement in AI Standards and Related Tools. Congress has directed NIST to collaborate with the private and public sectors to develop the AI RMF.

Please go forth and do good things for the world, AI Orgs :-)

  1. ^

    Uncertain whether Open Phil should actually be on this list, but including it for completeness.