Consider Bridges v South Wales Police, where the court found in favour of Bridges on some grounds not because the AI system was shown to be biased, but because an adequate Data Protection Impact Assessment (DPIA) had not been carried out. Put simply, South Wales Police hadn't made sure the system wasn't biased. A DPIA is a foundation-level document in almost any compliance procedure.
This is an interesting anecdote. It reminds me of how US medical companies must go through the FDA's premarket approval process for software designed for prespecified uses, which keeps them from launching medical software willy-nilly on the market. If they released before FDA approval, they would quite likely face regulatory action (and/or be held liable in court).
That's a good regulatory mechanism, and it isn't unlike many that exist UK-side for security or nuclear applications. Surprisingly, there isn't a similar requirement for policing, although the case above has drastically improved the willingness of forces to have such systems adequately (and sometimes publicly) vetted. It certainly raised the seriousness with which AI safety is considered in a few industries.
I'd really like to see a system similar to the one you just mentioned for AI systems over a certain threshold, or for sale into certain industries. A licensing process would be useful, though it obviously faces challenges because AI systems can and do change over time. This is one of the big weaknesses of a NIST certification, and one I am careful to raise with those seeking regulatory input.
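To make that drift problem concrete, here is a minimal sketch of how a certifying body might fingerprint a model at approval time so that any subsequent retraining is detectable. This is my own illustration, not any actual NIST (or other) mechanism; `fingerprint_model`, `CERTIFIED_DIGEST`, and the toy weights are all hypothetical.

```python
import hashlib
import json

def fingerprint_model(weights: dict) -> str:
    """Return a SHA-256 digest of a model's serialised weights.

    A certification body could record this digest at approval time;
    any retraining or fine-tuning changes the digest, signalling that
    the deployed model no longer matches the certified artefact.
    """
    # Canonical serialisation so identical weights always hash identically.
    canonical = json.dumps(weights, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# Hypothetical digest recorded when the system was certified.
CERTIFIED_DIGEST = fingerprint_model({"layer1": [0.12, -0.87], "layer2": [0.44]})

# After a routine retraining, the weights (and hence the digest) differ.
retrained = {"layer1": [0.13, -0.85], "layer2": [0.44]}

if fingerprint_model(retrained) != CERTIFIED_DIGEST:
    print("Deployed model no longer matches the certified artefact; re-review needed.")
```

The point of the sketch is that a one-off certificate attaches to a frozen artefact, while real systems are retrained continually, so any licensing regime would need ongoing verification rather than a single stamp of approval.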
Another problem with the NIST approach is an overemphasis on solving for identified risks, rather than on the precautionary principle (simply not deploying technology at a scale that could destabilise society), or on preventing, and ensuring legal liability for, designs that cause situation-specific harms.