The language shown in this tweet says:
It’s a trick!
Departing OpenAI employees are then offered a general release which meets the requirements of this section and also contains additional terms. What a departing OpenAI employee needs to do is have their own lawyer draft, execute, and deliver a general release which meets the requirements set forth. Signing the separation agreement is a mistake, and rejecting the separation agreement without providing your own general release is a mistake.
I could be misunderstanding this; I’m not a lawyer, just a person reading carefully. And there’s a lot more agreement text that I don’t have screenshots of. Still, I think the practical upshot is that departing OpenAI employees may be being tricked, and this particular trick seems defeatable to me. Anyone leaving OpenAI really needs a good lawyer.
See also: Call for Attorneys for OpenAI Employees and Ex-Employees
It seems like these terms would constitute theft if the equity awards in question were actual shares of OpenAI rather than profit participation units (PPUs). When an employee is terminated, their unvested RSUs or options may be cancelled, but the company would have no right to claw back shares that are already vested as those are the employee’s property. Similarly, don’t PPUs belong to the employee, meaning that the company cannot “cancel” them without consideration in return?
Daniel’s behavior here is genuinely heroic, and I say that as someone who is pretty skeptical of AI takeover being a significant risk*.
*(I still think the departure of safety people is bad news though.)
According to Kelsey’s article, OpenAI employees are coerced into signing lifelong nondisparagement agreements, which also forbid discussion of the nondisparagement agreements themselves, under threat of losing all of their equity.
This is intensely contrary to the public interest, and possibly illegal. Enormous kudos for bringing it to light.
In a legal dispute initiated by an OpenAI employee, the most important thing would probably be what representations were previously made about the equity. That’s hard for me to evaluate, but if it’s true that the equity was presented as compensation and the nondisparagement condition wasn’t disclosed, then rescinding those benefits could be a breach of contract. However, I’m not sure whether this would apply if the clawback was merely threatened and the threat wasn’t actually carried out.
CA GOV § 12964.5 and 372 NLRB No. 58 also offer some angles by which former OpenAI employees might fight this in court.
CA GOV § 12964.5 talks specifically about disclosure of “conduct that you have reason to believe is unlawful.” Generically criticizing OpenAI as pursuing unsafe research would not qualify unless (the speaker believes) it rises to the level of criminal endangerment, or similar. Copyright issues would *probably* qualify. Workplace harassment would definitely qualify.
(No OpenAI employees have alleged any of these things publicly, to my knowledge)
372 NLRB No. 58 nominally invalidates separation agreements that contain nondisparagement clauses or that restrict discussion of the terms of the separation agreement itself. However, it’s specifically focused on the effect on collective bargaining rights under the National Labor Relations Act, which could limit its applicability here.
Kelsey suggests that OpenAI may be admitting defeat here:
https://twitter.com/KelseyTuoc/status/1791691267941990764
Damage control, not defeat, IMO. It’s not defeat until they free previous leavers from the unfair nondisparagement agreements or otherwise make it right to them.
What about for people who’ve already resigned?
This does not bode well to me. One of my personal concerns about the usefulness of AI safety technical research is the extent to which the fruits of such research would actually be used by the frontier labs in practice. Even if some researcher or lab figures out a solution to the alignment problem, the eventual creators of AGI may not care enough to actually use it if it, for instance, comes with an alignment tax that slows down their capabilities work and leads to less profit, or worse, costs them first-mover advantage to a less scrupulous competitor.
OpenAI seems like the front-runner right now, and the fact that they had a substantial Alignment Team with substantial compute resources devoted to it at least made it seem like maybe they’d care enough to use any effective alignment techniques that do get developed and ensure that things go well. The gutting of the Alignment Team does not look good in this regard.
This feels really suss to me:
Sounds like it is time for someone to report them to the NLRB.
I’m not sure if you need standing to complain, but here’s the relevant link.