Let’s think about... lowering the burden of proof for liability for harms associated with AI.

Under many relevant laws worldwide, victims of damage caused by AI currently need to prove a wrongful action or omission by a person who caused that damage. However, the complexity, autonomy and opacity of AI may insulate AI developers from actual or perceived liability. Lowering the burden of proof for legal liability associated with the harms of AI could be a workaround.

In the long term, this reform could create new incentives (compensation) for individuals and organisations to pursue damages associated with AI, and for developers to consider the potential harms from their work. That could shape the long-run future of AI.

I see this as part of a policy mix, not a substitute for other reforms. The faster the AI take-off, the harder it will be to pursue malicious and negligent actors for damages. The wide-scale nature of the changes AI could bring also means that liability may be difficult to pursue on a case-by-case basis. Instead, this reform may act primarily as a deterrent against irresponsible development and an encouragement of more responsible development.

It can be hard to anticipate the kinds of damage that might be associated with the use of transformative AI, so the generalisability and simplicity of this approach are appealing while other approaches continue to be developed.

Liability is already on the political agenda, but it is not clear we’re on the ball. I have read that for high-risk AI systems (as defined in the EU’s AI Act), the complainant must show that:

  • the training, validation and testing data sets did not meet the quality criteria of the AI Act;

  • the AI system was not designed and developed in a way that meets the transparency requirements of the AI Act;

  • the AI system was not designed and developed in a way that allows for effective oversight by natural persons;

  • the AI system had inadequate cybersecurity protections; or

  • when problems were discovered, appropriate corrective actions were not ‘immediately taken’.

Even then, victims have to prove the operator was at fault or negligent in order to claim compensation for damages, which remains difficult because of the aforementioned complexity, autonomy and opacity of AI.
