For NIST AI 600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, it is suggested to add the following actions under GOVERN 3.2:
- Establish real-time monitoring systems that continuously track the actions and decisions of autonomous AI systems.
- Establish built-in mechanisms for human operators to intervene in AI decisions and take control when necessary.
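The suggested built-in intervention mechanism could be sketched as follows. This is a minimal illustrative example only; the class and method names are assumptions for this comment and do not appear in NIST AI 600-1.

```python
# Illustrative sketch of a human-override ("take control") mechanism.
# All names here are hypothetical, not drawn from NIST AI 600-1.
import threading


class OverridableAgent:
    """Wraps an autonomous decision function with a human override switch."""

    def __init__(self, decide):
        self._decide = decide              # autonomous decision function
        self._override = threading.Event() # set => human is in control
        self._manual_action = None

    def human_takeover(self, action):
        """A human operator supplies the action and takes control."""
        self._manual_action = action
        self._override.set()

    def release_control(self):
        """Return control to the autonomous system."""
        self._override.clear()

    def act(self, observation):
        # While the override is set, the human-chosen action wins.
        if self._override.is_set():
            return self._manual_action
        return self._decide(observation)
```

The point of the sketch is that the override path is built into the agent itself rather than bolted on outside it, so operator intervention cannot be bypassed by the autonomous decision logic.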
- Include “near-miss incidents” in Action ID V-4.3-004.
- As Action ID MS-2.6-008 is critical in managing high-risk GAI systems, it is suggested to include more detailed guidelines on “fail-safe mechanisms,” since fallback and fail-safe mechanisms are different.
Note: Fallback mechanisms aim to maintain some level of operational continuity, even at reduced functionality. Fail-safe mechanisms prioritize safety over continued operation, often resulting in a complete shutdown or a transition to a safe state.
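The distinction drawn in the note above can be made concrete with a short sketch. This is purely illustrative; the function and handler names are assumptions, not terminology from the NIST documents.

```python
# Illustrative contrast between fallback and fail-safe handling.
# All names are hypothetical, chosen for this comment only.

def with_fallback(primary, fallback, request):
    """Fallback: keep operating, possibly at reduced functionality."""
    try:
        return primary(request)
    except Exception:
        # Degraded but continuous service, e.g. a cached or simpler answer.
        return fallback(request)


def with_fail_safe(primary, enter_safe_state, request):
    """Fail-safe: prioritize safety over continued operation."""
    try:
        return primary(request)
    except Exception:
        enter_safe_state()  # e.g. shut down actuators, halt outputs
        raise               # no attempt to continue operating
```

In the fallback version a fault is absorbed and service continues; in the fail-safe version the same fault triggers a transition to a safe state and operation stops, which is exactly why guidance that conflates the two could be misread.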
For NIST SP 800-218A, it is suggested to include the following at P.11, Task PS.1.3:
- Document the justification for the selection of AI models and their hyperparameters.
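One way such documentation could be captured is as a small structured record kept alongside the model artifacts. The fields and example values below are assumptions made for illustration; SP 800-218A does not prescribe a format.

```python
# Hypothetical record for documenting model/hyperparameter selection.
# Field names and example values are illustrative assumptions only.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class ModelSelectionRecord:
    model_name: str
    justification: str                     # why this model was chosen
    hyperparameters: dict = field(default_factory=dict)
    hyperparameter_rationale: str = ""     # why these values were chosen


record = ModelSelectionRecord(
    model_name="gradient-boosted-trees",
    justification="Tabular data; interpretability requirement",
    hyperparameters={"max_depth": 6, "learning_rate": 0.1},
    hyperparameter_rationale="Selected by cross-validation on held-out data",
)

# Serialize to an auditable, version-controllable artifact.
print(json.dumps(asdict(record), indent=2))
```

Keeping the justification machine-readable makes it reviewable in the same change-control process as the model itself.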