This doesn’t compute for me. The quoted language obliges the AI company to “[i]mplement administrative, technical, and physical cybersecurity protections . . . that are appropriate in light of the risks associated with the covered model . . . .” It does not establish strict liability for the acts of malicious actors. If the implemented protections were appropriate, I don’t see the violation.
I also don’t get the point about liability shifting between the cybersecurity vendors and the regulated AI companies. These are big corporations, capable of negotiating bespoke contracts, obtaining cybersecurity insurance, and taking other steps to manage risk. If a given cybersecurity firm’s work is not up to snuff, the insurer will require the insured to switch to something more effective as a condition of coverage, or will hit the client with an appropriate surcharge. In fact, cybersecurity firms would make awful de facto insurers, because the risks they would hold would be highly correlated with one another.
Yes, the liability clauses in these contracts are sometimes negotiable if the customer is large enough. Often they are not, as we’ve seen in the fallout from the recent CrowdStrike blunder that caused worldwide chaos, where CrowdStrike has been invoking the EULA provision limiting its liability to twice the customer’s annual bill.
Fair, but I’m not sure how much difference there is between “not negotiable” and “no rational large customer would ever choose to buy cyberinsurance from its security vendor by negotiating a liability shift in exchange for paying massively more.” That would be like buying pandemic insurance from an insurer who sold only pandemic insurance (and wasn’t backstopped by reinsurance or government support). If and when you needed to make a claim, everyone else would be in the same position, and the claims would easily bankrupt the security vendor. Everyone would get only a small fraction of their claim paid and would hold the bag for the rest.
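To put rough numbers on that correlated-risk point, here is a minimal back-of-envelope sketch (every figure is a hypothetical illustration, not data about any real vendor): even with liability capped at twice each customer’s annual bill, a single correlated incident generates claims that dwarf the vendor’s entire revenue, so each claimant recovers only a sliver of its actual loss.

```python
# Back-of-envelope illustration of why a security vendor makes a poor
# de facto insurer for correlated failures. All figures are hypothetical.

n_customers = 1_000                 # customers hit by the same correlated incident
annual_bill = 1_000_000             # each customer's annual bill, in dollars
loss_per_customer = 50_000_000      # actual loss each customer suffers

# Contractual cap: liability limited to twice the annual bill
# (mirroring the CrowdStrike EULA provision discussed above).
capped_claim = 2 * annual_bill
total_capped_claims = n_customers * capped_claim    # $2B in capped claims

# Generously assume the vendor could pay out its entire annual revenue.
vendor_revenue = n_customers * annual_bill          # $1B

# Pro-rata payout: each customer gets the lesser of its capped claim
# and an equal share of everything the vendor can pay.
recovery_per_customer = min(capped_claim, vendor_revenue / n_customers)
fraction_recovered = recovery_per_customer / loss_per_customer

print(f"Capped claim per customer: ${capped_claim:,}")
print(f"Total claims vs. revenue:  ${total_capped_claims:,} vs ${vendor_revenue:,}")
print(f"Share of actual loss paid: {fraction_recovered:.1%}")   # ~2.0%
```

A conventional insurer avoids this failure mode by diversifying across uncorrelated lines of business and buying reinsurance; a security vendor whose claims all arrive at once can do neither.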