Thanks for encouraging involvement with the NIST AI Risk Management Framework (AI RMF) development process. My main focus currently is the AI RMF and related standards development, particularly on issues affecting AI safety and catastrophic risks. Colleagues at UC Berkeley and I previously submitted comments to NIST, available at https://www.nist.gov/system/files/documents/2021/09/16/ai-rmf-rfi-0092.pdf and https://cltc.berkeley.edu/2022/01/25/response-to-nist-ai-risk-management-framework-concept-paper/. We are also preparing comments on the AI RMF Initial Draft, which we plan to submit to NIST soon.
If any folks working on AI safety or governance are preparing comments of their own and want to discuss, I'd be happy to — you can email me at anthony.barrett@berkeley.edu.