Can anyone who is more informed on NIST comment on whether high-quality comments tend to be taken into account? Are drafts that are open for comment often substantially revised in response?
In case you were wondering, Secure Software Development Practices for Generative AI and Dual-Use Foundation Models completely avoids any discussion of the fact that releasing a dual-use model could potentially be dangerous or that the impacts of any such models should be evaluated before use. This is a truly stunning display of ball-dropping.
Update: I just checked NIST AI 600-1 as well: the report is extremely blasé about CBRN hazards from general AI (though it admits that “chemical and biological design tools might pose risks to society or national security”). They quote the RAND report claiming that the current generation doesn’t pose any such risks beyond web search, neglecting to mention that those results only applied to the release of a model over an API. As far as they’re concerned, these risks just need to be “carefully monitored”.
Sounds like we need some people to make some comments!!!!!
For NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, I suggest adding the following actions under GOVERN 3.2:
- Establish real-time monitoring systems that continuously track the actions and decisions of autonomous AI systems.
- Establish built-in mechanisms for human operators to intervene in AI decisions and take control when necessary (a rough sketch of such an override hook is included below).
- Include “near-miss incidents” in Action ID V-4.3-004.
- As Action ID MS-2.6-008 is critical for managing high-risk GAI systems, I suggest including more detailed guidelines on “fail-safe mechanisms”, since fallback and fail-safe mechanisms are different.
Note: fallback mechanisms aim to maintain some level of operational continuity, even if at reduced functionality, while fail-safe mechanisms prioritize safety over continued operation, often resulting in a complete shutdown or transition to a safe state.
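To make the fallback/fail-safe distinction (and the human-override suggestion above) concrete, here is a minimal Python sketch. All of the names and the loop structure are my own illustration, not anything NIST AI 600-1 specifies: the fallback branch keeps running at reduced functionality, while the fail-safe branch stops and returns to a safe state.

```python
# A minimal, illustrative sketch (my own names, not from NIST AI 600-1) of an
# autonomous loop with real-time monitoring, a human-operator override, a
# fallback path (reduced functionality), and a fail-safe path (safe shutdown).
from enum import Enum, auto


class Severity(Enum):
    OK = auto()
    DEGRADED = auto()         # e.g. low confidence or a non-critical tool failure
    SAFETY_CRITICAL = auto()  # e.g. the action violates a hard safety constraint


def run_with_guardrails(agent_step, assess, operator_wants_control):
    """Run an autonomous loop with monitoring, fallback, and fail-safe behavior.

    agent_step()             -> a proposed action
    assess(action)           -> Severity, the real-time monitor
    operator_wants_control() -> bool, a polled human-override channel
    """
    while True:
        if operator_wants_control():
            # Human intervention: pause autonomy and hand control to the operator.
            print("Human takeover requested: pausing autonomous operation.")
            return "handed_off"

        action = agent_step()
        severity = assess(action)

        if severity is Severity.SAFETY_CRITICAL:
            # Fail-safe: safety over continuity -- shut down to a safe state.
            print(f"Fail-safe triggered by {action!r}: entering safe state.")
            return "safe_state"

        if severity is Severity.DEGRADED:
            # Fallback: keep operating, but with reduced functionality.
            action = ("restricted", action)

        print(f"Executing {action!r}")


# Toy usage: the third proposed action trips the fail-safe and ends the run.
if __name__ == "__main__":
    proposals = iter(["summarize", "call_tool", "disable_logging"])
    run_with_guardrails(
        agent_step=lambda: next(proposals),
        assess=lambda a: Severity.SAFETY_CRITICAL if a == "disable_logging"
        else (Severity.DEGRADED if a == "call_tool" else Severity.OK),
        operator_wants_control=lambda: False,
    )
```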
For NIST SP 800-218A, I suggest adding the following at p. 11, Task PS.1.3:
- Document the justification for the selection of AI models and their hyperparameters.
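And to make the PS.1.3 suggestion concrete, a record along the following lines could capture that justification. The field names and JSON layout are assumptions of mine, not a format SP 800-218A prescribes (the type hints need Python 3.10+).

```python
# A hypothetical documentation record for model and hyperparameter selection;
# the fields and format are illustrative only, not prescribed by SP 800-218A.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class ModelSelectionRecord:
    model_name: str
    version: str
    selection_rationale: str                      # why this model over the alternatives
    alternatives_considered: list[str]
    hyperparameters: dict[str, float | int | str]
    hyperparameter_rationale: str                 # how the values were chosen
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


record = ModelSelectionRecord(
    model_name="example-classifier",
    version="1.2.0",
    selection_rationale="Best F1 on the held-out validation set among three candidates.",
    alternatives_considered=["baseline-logreg", "small-transformer"],
    hyperparameters={"learning_rate": 3e-4, "batch_size": 32, "epochs": 10},
    hyperparameter_rationale="Chosen by a grid search over learning rate and batch size.",
)

# Keep the record under version control next to the model artifacts it describes.
with open("model_selection_record.json", "w") as f:
    json.dump(asdict(record), f, indent=2)
```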