OpenAI defected, but we can take honest actions
OpenAI defected. As a non-profit, OpenAI recruited researchers on the promise of “safe” AGI. Then it pushed out the safety researchers and turned into a for-profit.
Why act in response?
OpenAI is recklessly scaling AI. Besides accelerating “progress” toward mass extinction, that scaling causes mounting harms. Many communities are now speaking up. In my circles alone, I count seven new books critiquing AI corporations. It’s what happens when you scrape everyone’s personal data to train inscrutable models (run in polluting data centers) that are used to cheaply automate professionals out of work and to spread disinformation and deepfakes.
Safety researchers used to say they could improve things from the inside. They didn’t want to squander that goodwill. That option is no longer available.
The rational step is to ratchet up pressure from the outside. If we stand by while OpenAI violates its charter, it signals that its execs can get away with it. Worse, it signals that we don’t care.
OpenAI is in a weaker position than it appears. It is seen as the industry leader, yet it is projected to lose $5 billion this year. Microsoft’s CEO changed their mind about injecting that amount after the board fired Sam, so OpenAI now depends on investment firms to pump in cash for compute every ten months or less. It is constantly nearing a financial cliff. If concerned communities collaborate seriously to make OpenAI stop the obvious harms caused by its model releases, or to hold the business liable, OpenAI will fail.
OpenAI’s forced downsizing would reset expectations across the industry. Since 2012, the gap between expenditure on deep learning and actual revenue has ballooned to over half a trillion dollars a year. If even OpenAI, the industry’s leading large-model developer, had to fire engineers and cut compute to save its failing business model, that could well trigger an AI crash. As media coverage turns cynical, investors will sell their stakes to get ahead of the crowd, and start-ups left without funding will go bankrupt. During that period of industry weakness, our communities can pass, and actually enforce, laws restricting harmful scaling.
OpenAI’s activities are harmful. Let’s be public and honest in our response.
If you start a campaign, please communicate how you are targeting OpenAI’s harmful activities. That way, we maintain the moral high ground in the eyes of the public.
Avoid smear campaigns for this reason. OpenAI can outspend and outhire us if it decides to counter-campaign. But the public distrusts Sam & co for failing to be open, and for their repeated dishonest claims. We can stand our ground by taking care to be open and honest.
The flipside: don’t downplay your critiques in public because you’re worried about sounding extreme. Many people are fed up with OpenAI, and you can be honest about that too.
Examples of what I see as honest actions:
Explain in public why you’re concerned.
Publish a technical demonstration of a GPT model malfunctioning (one way to document such a failure is sketched after this list).
Inform government decision-makers, e.g. through messaging campaigns and meetings with politicians.
Start a lawsuit, or send complaints about OpenAI’s overreaches to state attorneys general and regulators.
Donate to an org advocating on behalf of communities.
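For the technical-demonstration route, here is a minimal sketch of how a failure could be documented reproducibly. It assumes the openai Python SDK (version 1.x) and an API key in the environment; the model name and probe prompts below are placeholders, not a vetted test suite, and you would swap in the prompts that actually expose the problem.

```python
# Minimal sketch: reproducibly log a model's responses to probe prompts,
# so a claimed malfunction can be independently checked by others.
# Assumes the `openai` Python SDK (>=1.0) and OPENAI_API_KEY in the environment.
# The model name and prompts are placeholders.
import datetime
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROBE_PROMPTS = [
    "Placeholder prompt that previously produced a harmful or false answer.",
    "Placeholder follow-up prompt probing the same failure mode.",
]

records = []
for prompt in PROBE_PROMPTS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,        # reduces randomness, easier for others to reproduce
    )
    records.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": response.model,
        "prompt": prompt,
        "completion": response.choices[0].message.content,
    })

# Publish the raw log alongside your write-up so others can verify the results.
with open("gpt_malfunction_log.json", "w") as f:
    json.dump(records, f, indent=2)
```

Logging the exact prompts, model identifier, and timestamps lets readers verify the behaviour themselves rather than take the claim on trust, which keeps the demonstration in the open and honest spirit argued for above.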
Others are already taking action. Please act in line with your care, and contribute what you can.