I would like to see more developed thinking in EA circles about what a plausible remedy would look like if Musk prevails here. The possibility of “some kind of middle ground here” was discussed on the podcast, and I’d keep those kinds of outcomes in mind if Musk were to prevail at trial.
In @Garrison’s helpful writeup, he observes that:

OpenAI’s ability to attract enough investment to compete may be dependent on it being structured more like a typical company. The fact that it agreed to such onerous terms in the first place implies that it had little choice.
And I would guess that’s going to be a key element of OpenAI’s argument at trial. They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn’t viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn’t seem necessarily unreasonable under general charitable-law principles to me. The district court didn’t need to go there at this point given that the existence of an actual contract or charitable trust between the parties is a threshold issue, and I am not seeing much on this point in the court’s order.
To me, this is not only a defense for OpenAI but is also intertwined with the question of remedy. A permanent injunction is not awarded to a prevailing party as a matter of right. Rather:
According to well-established principles of equity, a plaintiff seeking a permanent injunction must satisfy a four-factor test before a court may grant such relief. A plaintiff must demonstrate: (1) that it has suffered an irreparable injury; (2) that remedies available at law, such as monetary damages, are inadequate to compensate for that injury; (3) that, considering the balance of hardships between the plaintiff and defendant, a remedy in equity is warranted; and (4) that the public interest would not be disserved by a permanent injunction.

eBay Inc. v. MercExchange, L.L.C., 547 U.S. 388 (2006) (U.S. Supreme Court decision).
The district court’s discussion of the balance of equities focuses on the fact that “Altman and Brockman made foundational commitments foreswearing any intent to use OpenAI as a vehicle to enrich themselves.” It’s not hard to see how an injunction against payola for insiders would meet traditional equitable criteria.
But an injunction that could pose a significant existential risk to OpenAI’s viability could run into some serious problems on prong four. It’s not likely that the district court would conclude the public interest affirmatively favors Meta, Google, xAI, or the like reaching AGI first as opposed to OpenAI. There is a national-security angle to the extent that the requested injunction might increase the risk of another country reaching AGI first. And to the extent that the cash from selling off OpenAI control would be going to charitable ends rather than lining Altman’s pockets, it’s going to be hard to argue that OpenAI’s board has a fiduciary duty to just shut it all down and vanish ~$100B in charitable assets into thin air.
And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn’t quick, the evidence of antitrust violations was massive, and there just wasn’t any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.
If OpenAI is practically enjoined from raising the capital needed to achieve its goals, the usual responsible thing for a charity that can no longer effectively function is to sell off its assets and distribute the proceeds to other non-profits. Think about a non-profit set up to run a small rural hospital that is no longer viable on its own. It might prefer to merge with another non-profit, but selling the whole hospital to a for-profit chain is usually the next-best option, with selling the land and equipment as a backup option. In a certain light, how different might a sale be from what OpenAI is proposing to do? I’d want to think more about that . . .
With Musk as plaintiff, there are also some potential concerns on prong three relating to laches (the idea that Musk slept on his rights and prejudiced OpenAI-related parties as a result). Although I’m not sure if the interests of OpenAI investors and employees (who are not Altman and Brockman) with equity-like interests would be analyzed under prong three or four, it does seem that he sat around without asserting his rights while others invested cash and/or sweat equity into OpenAI. In contrast, “[t]he general principle is, that laches is not imputable to the government . . . .” United States v. Kirkpatrick, 22 U.S. (9 Wheat) 720, 735 (1824). I predict that any relief granted to Musk will need to take account of these third-party interests, especially because they were invested in while Musk slept on his rights. The avoidance of a laches argument is another advantage of a governmental litigant as opposed to Musk (although the third-party interests would still have to be considered).
All that is to say that, while “this is really important and what OpenAI wants is bad” may be an adequate basis for public advocacy for now, I think there will need to be a judicially and practically viable plan for what appropriate relief looks like at some point. Neither side in the litigation would be a credible messenger on this point, as OpenAI is compromised and its competitor Musk would like to pick off assets for his own profit and power-seeking purposes. I think that’s one of the places where savvy non-party advocacy could make a difference.
Would people rather see OpenAI sold off to whatever non-insider bidder the board determines would be best, possibly with some judicial veto of a particularly bad choice? Would people prefer that a transition of some sort go forward, subject to imposition of some sort of hobbles that would slow OpenAI down and require some safety and ethics safeguards? These are the sorts of questions on which I think a court would be more likely to defer to the United States as an amicus and/or to the state AGs, and would be more likely to listen to subject-matter experts and advocacy groups who sought amicus status.
They may assert that subsequent developments establish that nonprofit development of AI is financially infeasible, that they are going to lose the AI arms race without massive cash infusions, and that obtaining infusions while the nonprofit is in charge isn’t viable. If the signs are clear enough that the mission as originally envisioned is doomed to fail, then switching to a backup mission doesn’t seem necessarily unreasonable under general charitable-law principles to me
I’m confused about this line of argument. Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
I tried to find the original mission statement. Is the following correct?
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
If so, I can see how OpenAI can try to argue that “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole” necessitates them “winning the AI arms race”, but I don’t exactly see why an impartial observer should grant them that.
Why is losing the AI arms race relevant to whether the mission as originally envisioned is doomed to fail?
It depends on what exactly “losing the AI arms race” means, which is in turn influenced by how big the advantages of being first (or one of the first) to AGI are. If the mission was to “advance digital intelligence,” and it was widely understood that the mission involved building AGI and/or near-AGI, that would seem to imply some sort of technological leadership position was prerequisite to mission success. I agree that being first to AGI isn’t particularly relevant to succeeding at the mission. But if they can’t stay competitive with Google et al., it’s questionable whether they can meaningfully achieve the goal of “advanc[ing] digital intelligence.”
So for instance, if OpenAI’s progress rate were to be reduced by X% due to the disadvantages in raising capital it faces on account of its non-profit structure, would that be enough to render it largely irrelevant as other actors quickly passed it and their lead grew with every passing month? I think a lot would depend on what X% is. A range of values seem plausible to me; as I mentioned in a different comment I just submitted, I suspect that fairly probative evidence on OpenAI’s current ability to fundraise with its non-profit structure exists but is not yet public.
(I found the language you quoted going back to 2015, so it’s probably a fair characterization of what OpenAI was telling donors and governmental agencies at the beginning.)
To me, “advanc[ing] digital intelligence in the way that is most likely to benefit humanity as a whole” does not necessitate them building AGI at all. Indeed the same mission statement can be said to apply to e.g. Redwood Research.
Further evidence for this view comes from OpenAI’s old merge-and-assist clause, which indicates that they’d be willing to fold and assist a different company if the other company is a) within 2 years of building AGI and b) sufficiently good.
Thanks for sharing this; it’s very informative and helpful for highlighting a potential leverage point. Strong upvoted.
One minor point of disagreement: I think you are being a bit too pessimistic here:
And put in more EA-coded language: the base rate of courts imploding massive businesses (or charities) is not exactly high. One example in which something like this did happen was the breakup of the Bell System in 1982, but it wasn’t quick, the evidence of antitrust violations was massive, and there just wasn’t any other plausible remedy. Another would be the breakup of Standard Oil in 1911, again a near-monopoly with some massive antitrust problems.
There are few examples of US courts blowing up large US corporations, but that is not exactly the situation here. OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. There is a long history of businesses exaggerating the harm from new regulations, claiming they will be ruinous when in fact human ingenuity and entrepreneurship render them merely disadvantageous. The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims without proof that this cannot continue.
I think a closer example might be when the DC District Court sided with the FTC and blocked the Staples-Office Depot merger on somewhat dubious grounds. The court didn’t directly implode a massive retailer… but Staples did enter administration shortly afterwards, and my impression at the time was the causal link was pretty clear.
OpenAI might claim that preventing a for-profit conversion would destroy or fatally damage the company, but they do not have proof. [. . . .] The fact is that thus far OpenAI has raised huge amounts of money and been at the forefront of scaling with its current hybrid structure, and I think a court could rightfully be skeptical of claims without proof that this cannot continue.
Yes, that’s the counterargument. I submit that there is likely to be pretty relevant documentary and testimonial evidence on this point, but we don’t know which way it would go. So I don’t have any clear opinion on whether OpenAI’s argument would work and/or how much these kinds of concerns would shape the scope of injunctive relief.
OpenAI agreed to terms that I would almost characterize as a poison pill: if the transformation doesn’t move forward on time, the investors can get that $6.6B back. It may be that would-be investors were not willing to put up enough money to keep OpenAI going without a commitment to refund if the non-profit board were not disempowered. As you mentioned, corporations exaggerate the detrimental impact of legal requirements they don’t like all the time! But the statements and actions of multiple, independent third-party investors should be less infected on this issue. If an inability to secure adequate funding as a non-profit is what this evidence points toward, I think that would be enough to establish a prima facie case and require proponents to put up evidence of their own to rebut that case.
So who will make that case? It’s not clear Musk will assert that OpenAI can stay competitive while remaining a non-profit; his expression of a desire “[o]n behalf of a consortium of buyers,” “to acquire all assets . . . of OpenAI” for $97,375,000,000 (Order at 14 n.10) suggests he may not be inclined to advocate for OpenAI’s ability to use its own assets to successfully advance its mission.
There’s also the possibility that the court would show some deference on this question to the business judgment of OpenAI’s independent board members if people like Altman and Brockman were screened off enough. It seems fairly clear to me that everyone understood early on there would need to be some for-profit elements in the mix, and so I think the non-conflicted board members may get some benefit of the doubt in figuring that out.
To the extent that evidence from the recent fundraising cycle supports the risk-of-fatal-damage theory, I suspect the relevance of fundraising success that occurred prior to the board controversy may be limited. I think it would be reasonable to ascribe lowered funder willingness to tolerate non-profit control to that controversy.