Executive summary: The author proposes a framework for calculating punitive damages to incentivize AI developers to mitigate risks from advanced AI systems. Technical AI safety researchers can help implement this by analyzing potential catastrophic scenarios and estimating key parameters.
Key points:
1. Punitive damages can “pull forward” liability for uninsurable AI risks by applying them in correlated cases of compensable harm.
2. The proposed formula calculates punitive damages from the total expected uncompensable harm, the plaintiff’s share of compensable harm, and the relative elasticity of the uncompensable risk (a hedged sketch of how these might combine follows this list).
3. Implementing this requires estimating uncompensable harm and elasticities. Model evaluations could analyze catastrophic risk pathways.
4. Knowledge that compensable harm occurred can help select pathways to analyze, but probability estimates shouldn’t update based on that knowledge.
5. More work is needed to implement these suggestions. The author invites collaboration, including help securing funding.
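For concreteness, here is a minimal sketch of how those three quantities might feed into a per-plaintiff punitive award, assuming they combine multiplicatively. The multiplicative form, the function name, and the parameter names are illustrative assumptions, not the formula from the post:

```python
# Minimal sketch (illustrative only, not the post's exact formula):
# one natural way the three quantities in key point 2 could combine
# multiplicatively into a per-plaintiff punitive award.

def punitive_damages(
    expected_uncompensable_harm: float,  # total expected uncompensable harm (e.g. in dollars)
    plaintiff_share: float,              # plaintiff's share of the compensable harm (0 to 1)
    relative_elasticity: float,          # elasticity of uncompensable risk relative to compensable risk
) -> float:
    """Hypothetical multiplicative combination of the three factors."""
    return expected_uncompensable_harm * plaintiff_share * relative_elasticity

# Example: $10B of expected uncompensable harm, a plaintiff bearing 2% of the
# compensable harm, and uncompensable risk twice as responsive to precaution
# as compensable risk would yield a $400M punitive award under this sketch.
print(punitive_damages(10e9, 0.02, 2.0))  # 400000000.0
```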
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
On key point 5, I was offering to help secure funding, not asking for help.