Interesting post! Another potential downside (which I don’t think you mention) is that strict liability could disincentivize information sharing. For example, it could make AI labs more reluctant to disclose new dangerous capabilities or incidents (when that’s not required by law). That information could be valuable for other AI labs, for regulators, for safety researchers, and for users.
Really good point! Also, I just realised that the dynamic you're describing is already playing out in cybersecurity incident reporting in many countries.