NIST Seeks Comments on Draft AI Guidance Documents, Announces Launch of New Program to Evaluate and Measure GenAI Technologies

The US government’s National Institute of Standards and Technology (NIST) is seeking comments on four draft AI guidance documents, and it has announced a new program to evaluate and measure generative AI technologies.

Both seem like great opportunities for people interested in working on AI governance and evals!

AI governance proposals you can comment on

Here are the drafts from the announcement:

  1. NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile

  2. NIST SP 800-218A, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models

  3. NIST AI 100-4, Reducing Risks Posed by Synthetic Content: An Overview of Technical Approaches to Digital Content Transparency

  4. NIST AI 100-5, A Plan for Global Engagement on AI Standards

Drafts of NIST AI 600-1, NIST AI 100-4, and NIST AI 100-5 are available for review and comment on the NIST Artificial Intelligence Resource Center website; the draft of NIST SP 800-218A is available for review and comment on the NIST Computer Security Resource Center website.

The publications cover varied aspects of AI technology. The first two are guidance documents designed to help manage the risks of generative AI (the technology that enables chatbots and text-based image and video creation tools) and serve as companion resources to NIST’s AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF), respectively. The third offers approaches for promoting transparency in digital content, which AI can generate or alter; the fourth proposes a plan for global engagement on the development of AI standards.

Evals challenge: registration opens in May

The NIST GenAI program will issue a series of challenge problems designed to evaluate and measure the capabilities and limitations of generative AI technologies. These evaluations will be used to identify strategies to promote information integrity and guide the safe and responsible use of digital content. One of the program’s goals is to help people determine whether a human or an AI produced a given text, image, video or audio recording. Registration opens in May for participation in the pilot evaluation, which will seek to understand how human-produced content differs from synthetic content. More information about the challenge and how to register can be found on the NIST GenAI website.
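For a concrete sense of what a “discriminator” entry to this kind of evaluation might involve, here is a minimal sketch of a human-vs-AI text classifier. It is purely illustrative: the training texts, labels, and feature choices below are placeholders of my own, and NIST has not published the pilot’s actual data formats or scoring at this level of detail.

```python
# Illustrative only: a toy "discriminator" baseline for the kind of task the
# NIST GenAI pilot describes (telling human-written text from AI-generated text).
# The texts and labels are made-up placeholders, not NIST challenge data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder training data: 1 = human-written, 0 = AI-generated (hypothetical).
texts = [
    "honestly the ending of that book wrecked me, did not see it coming",
    "As an AI language model, I can summarize the key points as follows.",
    "we got rained out so the whole trip turned into a board-game marathon",
    "In conclusion, there are several important factors to consider carefully.",
]
labels = [1, 0, 1, 0]

# Character n-gram TF-IDF features feeding a logistic-regression classifier.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new passage; in a real evaluation this would be held-out test data.
print(model.predict_proba(
    ["The weather today is expected to be sunny with mild temperatures."]
))
```

A real submission would of course need far more data and stronger features, but the shape of the task is the same: given a piece of content, output how likely it is to be synthetic.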