Released today (10/30/23), this is crazy; it's perhaps the most sweeping action a government has taken on AI yet.
Below, I’ve segmented the proposals by x-risk and non-x-risk, excluding those geared towards promoting AI’s use[1] and focusing solely on those aimed at risk. It’s worth noting that some of these are very specific and direct an action to be taken by one of the executive branch organizations (e.g. sharing of safety test results), while others are guidances that “call on Congress” to pass legislation codifying the desired action.
[Update]: The official order (this is a summary of the press release) has now been released, so if you want to see how these are codified at greater granularity, look there[2].
Existential Risk Related Actions:
Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests.
Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety.
Protect against the risks of using AI to engineer dangerous biological materials by developing strong new standards for biological synthesis screening. Agencies that fund life-science projects will establish these standards as a condition of federal funding, creating powerful incentives to ensure appropriate screening and manage risks potentially made worse by AI.
Order the development of a National Security Memorandum that directs further actions on AI and security, to be developed by the National Security Council and White House Chief of Staff. This document will ensure that the United States military and intelligence community use AI safely, ethically, and effectively in their missions, and will direct actions to counter adversaries’ military use of AI.
Expand bilateral, multilateral, and multistakeholder engagements to collaborate on AI. The State Department, in collaboration with the Commerce Department, will lead an effort to establish robust international frameworks for harnessing AI’s benefits, managing its risks, and ensuring safety. This will include accelerating the development and implementation of AI standards.
Non-Existential Risk Actions:
General
Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content. Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic—and set an example for the private sector and governments around the world.
Discrimination
Provide clear guidance to landlords, Federal benefits programs, and federal contractors to keep AI algorithms from being used to exacerbate discrimination.
Address algorithmic discrimination through training, technical assistance, and coordination between the Department of Justice and Federal civil rights offices.
Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
Healthcare
Advance the responsible use of AI in healthcare and the development of affordable and life-saving drugs. The Department of Health and Human Services will also establish a safety program to receive reports of, and act to remedy, harms or unsafe healthcare practices involving AI.
Jobs
Develop principles and best practices to mitigate the harms and maximize the benefits of AI for workers by addressing job displacement; labor standards; workplace equity, health, and safety; and data collection. These principles and best practices will benefit workers by providing guidance to prevent employers from undercompensating workers, evaluating job applications unfairly, or impinging on workers’ ability to organize.
Produce a report on AI’s potential labor-market impacts, and study and identify options for strengthening federal support for workers facing labor disruptions, including from AI.
Privacy
Protect Americans’ privacy by prioritizing federal support for accelerating the development and use of privacy-preserving techniques, as well as evaluations of the effectiveness of these techniques.
Strengthen privacy-preserving research and technologies, such as cryptographic tools that preserve individuals’ privacy, by funding a Research Coordination Network to advance rapid breakthroughs and development. The National Science Foundation will also work with this network to promote the adoption of leading-edge privacy-preserving technologies by federal agencies.
Evaluate how agencies collect and use commercially available information—including information they procure from data brokers—and strengthen privacy guidance for federal agencies to account for AI risks.
This was the press release; the actual order has now been published.
One safety-relevant part:
Great to see how concrete and serious the US is now. This basically means that models more powerful than GPT-4 have to be reported to the government.
Thanks, I’ll toss this in at the top now for those who are curious
Thank you very much for splitting this up into sections in addition to posting the linkpost itself
Anytime :) I didn’t do much, but glad to know it was helpful because I was debating whether to continue trying to organize for future stuff
Would the information in this quote fall under any of the Freedom of Information Act (FOIA) exemptions, particularly those concerning national security or confidential commercial information/trade secrets? Or would there be other reasons why it wouldn’t become public knowledge through FOIA requests?
Yes, I expect that the government would aim to protect the reported information (or at least key sensitive details) as CUI or in another way that would be FOIA exempt.
Executive summary: President Biden issued an executive order with sweeping proposals for regulating AI systems, including requirements to share safety test results for powerful models and to develop standards ensuring trustworthy AI.
Key points:
Requires developers of powerful AI systems to share safety tests and notify government before training models that pose national security risks.
Directs establishing standards and tools for safe, secure, trustworthy AI systems.
Calls for standards to screen dangerous biological materials synthesized using AI.
Seeks international collaboration on developing AI standards and managing risks.
Aims to protect against AI fraud, promote non-discriminatory AI, and support workers impacted by AI automation.
Focuses on privacy protections and evaluating government use of personal data and AI.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.