I’m skeptical that this would be cost-effective. Section 230 aside, it is incredibly expensive to litigate in the US. Even if you found a somewhat viable claim (which I’m not sure you would), you would be litigating against a company like Microsoft. It would most likely cost millions of dollars to find a good case and pursue it, and then it would be settled quietly. Legally speaking, you probably couldn’t be forced to settle (though in some cases you could); practically speaking, it would be very hard, if not impossible, to pursue a case through trial, and you’d need a willing plaintiff. Settlement agreements often contain confidentiality clauses that would constrain the signaling value of your suit. Judgments would almost certainly be for money damages, not any type of injunctive relief.
All the big tech players have weathered high-profile, billion-dollar lawsuits. It is possible that you could scare some small AI startups with this strategy, but I’m not sure the juice is worth the squeeze. In the best-case scenario, some companies might pivot away from the mass market and toward a B2B model. I don’t know whether this would be good or bad for AI safety.
If you want to keep working on this, you might look to Legal Impact for Chickens as a model for EA impact litigation. Their situation is a bit different though, for reasons I can expand on later if I have time.
Maybe not the most cost-effective thing in the whole world, but possibly still a great project for EAs who already happen to be lawyers and want to contribute their expertise (see organizations like Legal Priorities Project or Legal Impact for Chickens).
This also feels like the kind of thing where EA wouldn’t necessarily have to foot the entire bill for an eventual mega-showdown with Microsoft or the like. We could just fund some seminal early cases and figure out what a general “playbook” should look like for building possibly-winnable lawsuits that would encourage companies to pay more attention to alignment / safety / assessment of their AI systems. Then other people, motivated by the prospect of a big payout from a giant tech company, would surely be happy to launch their own lawsuits once we’d established enough of a “playbook” for how such cases work.
One important aspect of this project, perhaps, should be trying to craft legal arguments that encourage companies to take useful, potentially-x-risk-mitigating actions in response to lawsuit risk, rather than just coming up with whatever legal arguments will most likely result in a payout. This could set the tone for the field in an especially helpful direction.
I think your last paragraph hits on a real risk here: litigation response is driven by fear of damages, so it will push AI companies’ interest in what they call “safety” toward wherever their aggregate damages exposure is greatest and/or wherever litigation poses an existential risk to the company.
If only there were some sort of new technology that could be harnessed to empower millions of ordinary people with small legitimate legal grievances against AI companies to file their own suits as self-represented litigants, with documents that are at least good enough to make it past the initial pleading stages…
(not intended as a serious suggestion)
If people do use chatbots to help with pro se litigation, then that opens a possible legal theory of liability against AI companies, namely that AI chatbots (or the companies that run them) are practicing law without a license.
Of course, this could extend to other related licensure violations, such as practicing medicine without a license.
Yes. The definition of “unauthorized practice of law” is murkier and depends more on context than one might think. For instance, I personally used—and recommend for most people without complex needs—the Nolo/Quicken WillMaker will-writing software.
On a more serious note, if there were 25 types of small legal harm commonly caused by AI chatbots, writing 25 books on “How to Sue a Chatbot Company For Harm X, Including Sample Pleadings” is probably not going to constitute unauthorized practice.