That’s a good regulatory mechanism, and not unlike several that exist UK-side for security or nuclear applications. Surprisingly, there isn’t a similar requirement for policing, although the above-mentioned case has drastically improved the willingness of forces to have such systems adequately (and sometimes publicly) vetted. It has certainly increased the seriousness with which AI safety is treated in a few industries.
I’d really like to see a system similar to the one you just mentioned for AI systems over a certain threshold, or for sale to certain industries. A licensing process would be useful, though it obviously faces challenges because AI can and does change over time. This is one of the big weaknesses of a NIST certification, and one I am careful to raise with those seeking regulatory input.
Another problem with the NIST approach is an overemphasis on solving for identified risks, rather than applying the precautionary principle (just don’t deploy scaled tech that could destabilise society at scale), or on preventing, and ensuring legal liability for, designs that cause situational harms.