We are not alone: many communities want to stop Big Tech from scaling unsafe AI

Less than a year ago, a community-wide conversation started about slowing down AI.

Some commented that outside communities won’t act effectively to restrict AI, since they’re not “aligned” with our goal of preventing extinction. That’s where I stepped in:


Communities are already taking action – to restrict harmful scaling of AI.
I’m in touch with creatives, data workers, journalists, veterans, product safety experts, AI ethics researchers, and climate change researchers organising against harms.


Today, I drafted a plan to assist creatives. It’s for a funder, so I omitted details.
Would love your thoughts before the AI Pause Debate Week closes:

Plan

Rather than hope new laws will pass in 1-2 years, we can enforce established laws now. It is in AI Safety’s interest to support creatives to enforce laws against data laundering.

To train “everything for everyone” models (also called General-Purpose AI), companies scrape the web. AI companies have scraped so much personal data that they are breaking laws. These laws protect copyright holders against text and data mining, children against the sharing of CSAM, and citizens against the processing of personal data.

Books, art and photos got scraped to train AI without consent, credit or compensation. Creatives began lobbying, and filed six class-action lawsuits in the US. A prediction market now puts a 24% chance on generative AI trained on crawled art being illegal in the US in 2027 on copyright grounds.

In the EU, no lawsuit has been filed. Yet the case is stronger in the EU.

In the EU, this commercial text and data mining is illegal. The 2019 Digital Single Market directive upholds a 2001 provision: “Such [TDM] exceptions and limitations may not be applied in a way which prejudices the legitimate interests of the rightholder or which conflicts with the normal exploitation of his work or other subject-matter.”

[project details]

This proposal is about restricting data laundering. If legal action here is indeed tractable, it is worth considering funding other legal actions too.

Long-term vision

We want this project to become a template for future legal actions.

Supporting communities’ legal actions to prevent harms can robustly restrict the scaled integration of AI in areas of economic production.

Besides restricting data, legal actions can restrict AI being scaled through harmful exploitation of workers, harmful uses, and harmful compute:
- Employment and whistleblowing laws can protect underpaid or misled workers.
- Tort, false advertising, and product safety laws can protect against misuses.
- Environmental regulations can protect against pollutive compute.

AI governance folk have focussed most on establishing regulations and norms to evaluate and prevent risks of catastrophe or extinction.

Risk-based regulation has many gaps, as described in this law paper:
❝ risk regulation typically assumes a technology will be adopted despite its harms… Even immense individual harms may get dismissed through the lens of risk analysis, in the face of significant collective benefits.

❝ The costs of organizing to participate in the politics of risk are often high… It also removes the feedback loop of tort liability: without civil recourse, risk regulation risks being static. Attempts to make risk regulation “adaptive” or iterative in turn risk capture by regulated entities.

❝ risk regulation as most scholars conceive of it entails mitigating harms while avoiding unnecessarily stringent laws, while the precautionary principle emphasizes avoiding insufficiently stringent laws… [M]any of the most robust examples of U.S. risk regulation are precautionary in nature: the Food and Drug Administration’s regulation of medicine… and the Nuclear Regulatory Commission’s certification scheme for nuclear reactors. Both of these regulatory schemes start from the default of banning a technology from general use until it has been demonstrated to be safe, or safe enough.


Evaluative risk-based regulation tends to lead to AI companies being overwhelmingly involved in conceiving of and evaluating the risks. Some cases:
- OpenAI lobbying against categorizing GPT as “high risk”.
- Anthropic’s Responsible Scaling Policy – in effect allowing staff to keep scaling, as long as they or the board evaluate the risk that their “AI model directly causes large scale devastation” as low enough.
- Subtle regulatory capture of the UK’s AI Safety initiatives.

Efforts to pass risk-based laws will be co-opted by Big Tech lobbyists aiming to dilute restrictions on AI commerce. The same is not so with lawsuits – the most AI companies can do is try not to lose the case.

Lawsuits put pressure on Big Tech in a “business as usual” way. Of course, companies should not be allowed to break laws to scale AI. Of course, AI companies should be held accountable. Lawsuits focus on the question of whether specific damages were caused, rather than on broad ideological disagreements, which makes them less politicky.

Contrast the climate debates in the US Congress with how the Sierra Club sued coal plant after coal plant, on whatever violations it could find, preventing the scale-up of coal plants under the Trump Administration.

A legal approach reduces conflicts between communities concerned about AI.
The EU Commission’s announcement that “mitigating the risk of extinction should be a global priority” drew bifurcated reactions – excitement from the AI Safety side, critique from the AI Ethics side. Putting aside whether a vague commitment to mitigate extinction risks can be enforced, the polarization around it curbs a collective response.

Lately, there have been heated discussions between AI Ethics and AI Safety. Concerns need to be recognised (e.g. should AI Safety folk have given labs funds, talent, and ideological support? Should AI Ethics folk worry about more than current stochastic parrots?).
But it distracts from what needs to be done: restrict Big Tech from scaling unsafe AI.

AI Ethics researchers have been supporting creatives, but lack funds.
AI Safety has watched on, but could step in to alleviate the bottleneck.
Empowering creatives is a first step to de-escalating the conflict.

Funding lawsuits helps rectify a growing power imbalance: AI companies get held liable for the damage they cause to individual citizens, rather than being free to extract profit and reinvest it in artificial infrastructure.

Communities are noticing how Big Tech consolidates power with AI.
Communities are noticing the growing harms and risks of corporate-funded automated technology growth.

People feel helpless. Community leaders are overwhelmed by the immensity of the situation, recognising that their efforts alone will not be enough.

Like some Greek tragedy, Big Tech divides and conquers democracy.
Do we watch our potential allies wither one by one – first creatives, then gig workers, then Black and conservative communities, then environmentalists?

Can we support them to restrict AI on the frontiers? Can we converge on a shared understanding of what situations we all want to resolve?