The linked article says—persuasively, in my view—that Section 230 generally doesn’t shield companies like OpenAI for what their chatbots say. But that merely takes away a shield; you still need a sword (a theory of liability) on top of that.
My guess is that most US courts will rely significantly on analogies in the absence of legislative action, and some of those analogies are not especially friendly to litigation. Arguably the broadest analogy is to buggy software with security holes that can be exploited and cause damage; I don’t think plaintiffs have had much success with those sorts of lawsuits. If there is an intervening human actor, that can also make causation more difficult to establish. Obviously that is all at the 100,000-foot level and off the cuff! To the extent the harmed person is a user of the AI, they may have signed an agreement that limits their ability to sue, whether by waiving certain claims, capping potential damages, or imposing onerous procedural requirements that mandate private arbitration and preclude class actions.
There are some activities at common law that are considered ultrahazardous (abnormally dangerous) and that impose strict liability on the entity conducting them; using explosives is the usual example. But I don’t see a plausible case that using AI in an application right now is similarly hazardous in a way that would justify extending those precedents to AI harm.