From Reuters:

SAN FRANCISCO, Sept 25 (Reuters) - ChatGPT-maker OpenAI is working on a plan to restructure its core business into a for-profit benefit corporation that will no longer be controlled by its non-profit board, people familiar with the matter told Reuters, in a move that will make the company more attractive to investors.

I sincerely hope OpenPhil (or Effective Ventures, or both—I don’t know the minutiae here) sues over this. Read the reasoning for and details of the $30M grant here.
The case for a legal challenge seems hugely overdetermined to me:
Stop/delay/complicate the restructuring, and otherwise make life appropriately hard for Sam Altman
Settle for a huge amount of money that can be used to do a huge amount of good
Signal that you can’t just blatantly take advantage of OpenPhil/EV/EA as you please without appropriate challenge
I know OpenPhil has a pretty hands-off ethos and vibe; this shouldn’t stop them from acting with integrity when hands-on legal action is clearly warranted
I understand that OpenAI’s financial situation is not very good (see https://www.wheresyoured.at/oai-business/ [edit: this may not be a high-quality source]), and if they aren’t able to convert to a for-profit, things will become even worse:
OpenAI has two years from the [current $6.6 billion funding round] deal’s close to convert to a for-profit company, or its funding will convert into debt at a 9% interest rate.
As an aside: how will OpenAI pay that interest in the event they can’t convert to a for-profit business? Will they raise money to pay the interest? Will they take out a loan?
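For a rough sense of scale, assuming simple annual interest on the full $6.6 billion (the reporting doesn’t spell out the exact terms): $6.6B × 0.09 ≈ $594M per year in interest alone, on top of their existing operating losses.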
It’s conceivable that OpenPhil suing OpenAI could buy us 10+ years of AI timeline, if the following dominoes fall:
OpenPhil sues, and OpenAI fails to convert to a for-profit.
As a result, OpenAI struggles to raise additional capital from investors.
Losing $4-5 billion a year with little additional funding in sight, OpenAI is forced to make some tough financial decisions. They turn off the free version of ChatGPT, stop training new models, and cut salaries for employees. They’re able to eke out some profit, but not much, because their product is not highly differentiated from other AI offerings.
Silicon Valley herd mentality kicks in. OpenAI has been the hottest startup in the Valley. If it becomes known as the next WeWork, its fall will be earth-shaking. Game-theoretically, it doesn’t make as much sense to invest in an early AI startup round if there’s no capital willing to invest in subsequent rounds. OpenAI’s collapse could generate the belief that AI startups will struggle to raise capital—and if many investors believe that, it could therefore become true.
The AI bubble deflates and the Valley refocuses on other industries.
It would be extremely ironic if the net effect of all Sam Altman and Mark Zuckerberg’s efforts is to make AI companies uninvestable and buy us a bunch of timeline: Sam by generating a bunch of hype that fails to deliver, and Mark by commoditizing LLMs. (I say “ironic” because EAs are used to thinking of both Sam and Mark as irresponsible actors in the AI space.)
EDIT: there is some criticism of OpenPhil’s approach to its public image here which may be relevant to the decision of whether to sue or not. Also, there’s the obvious point that OpenAI appears to be one of the worst actors in the AI space.
EDIT 2: One also needs to consider how Sam might respond, e.g. by starting a new company and attempting to poach all OpenAI employees.
Good Ventures rather than Effective Ventures, no?
What’s the legal case for a lawsuit?
There’s a broader point here about the takeover of a non-profit organization by financial interests that I’d really like to see fought back against.
You could also add: “Negotiate safety conditions as part of a settlement.”