Eli Barrish (Stanford Law School student & EA organizer)
Amazing! Glad to see this team growing.
I’m skeptical that this would be cost-effective. Section 230 aside, it is incredibly expensive to litigate in the US. Even if you found a somewhat viable claim (which I’m not sure you would), you would be litigating against a company like Microsoft. It would most likely cost millions of dollars to find a good case and pursue it, and then it would be settled quietly. Legally speaking, you probably couldn’t be forced to settle (though in some cases you could); practically speaking, it would be very hard if not impossible to pursue a case all the way through trial, and you’d need a willing plaintiff. Settlement agreements often contain confidentiality clauses that would limit the signaling value of your suit. Judgments would almost certainly be for money damages, not any kind of injunctive relief.
All the big tech players have weathered high-profile, billion-dollar lawsuits. You could possibly scare some small AI startups with this strategy, but I’m not sure the juice is worth the squeeze. In the best-case scenario, some companies might pivot away from the mass market and toward a B2B model. I don’t know whether that would be good or bad for AI safety.
If you want to keep working on this, you might look to Legal Impact for Chickens as a model for EA impact litigation. Their situation is a bit different, though, for reasons I can expand on later if I have time.
The “ethics is a front” stuff: is SBF saying naive utilitarianism is true and his past messaging amounted to a noble lie? Or is he saying ethics in general (including his involvement in EA) was a front to “win” and make money? Sorry if this is super obvious; I just see people commenting with both interpretations. To me it seems like he’s saying Option A (noble lie).
EDIT: Just adding some examples of people interpreting it as Option B (EA was the front): 1 2 3 4