AI Safety Newsletter #3: AI policy proposals and a new challenger approaches

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required.

Subscribe here to receive future versions.

---

Policy Proposals for AI Safety

Critical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety.

This could soon change. President Biden and members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.

From guiding principles to enforceable laws. Previous work on AI policy such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.

A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policymaking efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.

The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering on the EU AI Act. The Act focuses on eight “high-risk” applications of AI, including hiring, biometrics, and criminal justice. But the rise of general purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications.

An open letter signed by over 50 AI experts, including CAIS’s director, argues that the Act should also govern general purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU parliament have publicly agreed that rules are necessary for “powerful General Purpose AI systems that can be easily adapted to a multitude of purposes.”

Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:

  1. Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.

  2. Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose that AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product and deliberate misuse.

  3. Compute governance. AI regulations could be automatically enforced by software built into the cutting-edge computer chips used to train AI systems.

  4. Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: Don’t give AI influence over nuclear command and control.

  5. Fund safety research. Federal agencies that support work on AI safety, such as NIST and the NSF, could benefit from increased funding.

China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under these regulations, AI developers would be required to conduct security assessments, protect user data privacy, prevent impersonation via generative AI, and take legal responsibility for harm caused by their models. While some have opposed safety measures on the grounds that they would slow progress and allow countries like China to catch up, these regulations provide an opportunity for cooperation and taking precautions without fears of competitive loss.

---

Competitive Pressures in AI Development

The AI developer landscape is changing quickly—one organization is shifting its strategy, two organizations are merging, and one organization is emerging, all in order to adapt to competitive pressures.

Anthropic shifts its focus to products. Anthropic was originally founded by former OpenAI employees who were concerned that OpenAI’s product-focused direction was coming at the expense of safety. More recently, however, Anthropic has been influenced by competitive pressures and has shifted its focus toward products. TechCrunch obtained a pitch deck for Anthropic’s Series C fundraising round, which includes plans to build a model that is “10 times more capable than today’s most powerful AI” and that will require “a billion dollars in spending over the next 18 months.”

Elon Musk will likely launch a new AI company. Elon Musk is apparently launching a new artificial intelligence start-up to compete with OpenAI. Musk has already “secured thousands of high-powered GPU processors from Nvidia” and begun recruiting engineers from top AI labs.

While Musk now seeks to compete with OpenAI, he was originally one of OpenAI’s co-founders. He was allegedly inspired to start the company due to concerns that Google co-founder Larry Page was not taking AI safety seriously enough. Many years ago, when Musk brought up AI safety concerns to Page, the Google co-founder allegedly responded by calling Musk a “speciesist.” The emergence of a new major AI developer will likely increase competitive pressures.

Google Brain and DeepMind merge into Google DeepMind. Google announced a merger between Google Brain and DeepMind, two major AI developers. This restructuring was likely spurred by products from Google’s competitors, OpenAI and Microsoft. Google’s announcement stated this was to make them move “faster” and to “accelerate” AI development. The new organization, Google DeepMind, will be run by Demis Hassabis (former CEO of DeepMind).

From an AI safety perspective, the effect of this decision will largely be determined by whether the new organization’s safety culture more closely resembles DeepMind’s or Google Brain’s. DeepMind leadership has a much stronger track record of concern about AI safety, whereas Google Brain has never had a safety team. If DeepMind leadership has more influence over the new organization, this might represent a win for AI safety. If not, then society has essentially lost one of the (relatively) responsible leading AI developers.

Competitive pressures shape the AI landscape. These news updates reflect a larger trend in the AI development landscape: the role of competitive pressures. Building safer AI systems can incur substantial costs, and competitive pressures can make it difficult for AI developers—even ones that care about safety—to put safety first. The pressure to race ahead can cause actors to cut corners, especially when there are tradeoffs between safety and competitiveness.

---

See also: CAIS website, CAIS twitter, A technical safety research newsletter

Crossposted to LessWrong