Maybe not the most cost-effective thing in the whole world, but possibly still a great project for EAs who already happen to be lawyers and want to contribute their expertise (see organizations like Legal Priorities Project or Legal Impact for Chickens).
This also feels like the kind of thing where EA wouldn’t necessarily have to foot the entire bill for an eventual mega-showdown with Microsoft or another tech giant… we could just fund some seminal early cases and figure out what a general “playbook” should look like for building possibly-winnable lawsuits that encourage companies to pay more attention to alignment / safety / assessment of their AI systems. Then other people, motivated by the prospect of a big payout from a giant tech company, would surely be happy to launch their own lawsuits once we’d established enough of a “playbook” for how such cases work.
One important aspect of this project, perhaps, should be crafting legal arguments that encourage companies to take useful, potentially x-risk-mitigating actions in response to lawsuit risk, rather than just whatever arguments are most likely to result in a payout. This could set the tone for the field in an especially helpful direction.
I think your last paragraph hits on a real risk here: litigation response is driven by fear of damages, so it will push AI companies’ interest in what they call “safety” toward wherever their aggregate damages exposure is greatest and/or wherever litigation poses the largest existential risk to their company.