[Link Post: New York Times] White House Unveils Initiatives to Reduce Risks of A.I.
This is a linkpost for https://www.nytimes.com/2023/05/04/technology/us-ai-research-regulation.html
The White House on Thursday announced its first new initiatives aimed at taming the risks of artificial intelligence since a boom in A.I.-powered chatbots has prompted growing calls to regulate the technology.
The National Science Foundation plans to spend $140 million on new research centers devoted to A.I., White House officials said. The administration also pledged to release draft guidelines for government agencies to ensure that their use of A.I. safeguards “the American people’s rights and safety,” adding that several A.I. companies had agreed to make their products available for scrutiny in August at a cybersecurity conference.
The announcements came hours before Vice President Kamala Harris and other administration officials were scheduled to meet with the chief executives of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an A.I. start-up, to discuss the technology. A senior administration official said on Wednesday that the White House planned to impress upon the companies that they had a responsibility to address the risks of new A.I. developments.
The White House has been under growing pressure to police A.I. that is capable of crafting sophisticated prose and lifelike images. The explosion of interest in the technology began last year when OpenAI released ChatGPT to the public and people immediately began using it to search for information, do schoolwork and assist them with their jobs.
Since then, some of the biggest tech companies have rushed to incorporate chatbots into their products and accelerated A.I. research, while venture capitalists have poured money into A.I. start-ups.
But the A.I. boom has also raised questions about how the technology will transform economies, shake up geopolitics and bolster criminal activity. Critics have worried that many A.I. systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law on their own.
President Biden recently said that it “remains to be seen” whether A.I. is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way.
Spokeswomen for Google and Microsoft declined to comment ahead of the White House meeting. A spokesman for Anthropic confirmed the company would be attending. A spokeswoman for OpenAI did not respond to a request for comment.
The announcements build on earlier efforts by the administration to place guardrails on A.I. Last year, the White House released what it called a “Blueprint for an A.I. Bill of Rights,” which said that automated systems should protect users’ data privacy, shield them from discriminatory outcomes and make clear why certain actions were taken. In January, the Commerce Department also released a framework for reducing risk in A.I. development, which had been in the works for years.
The introduction of chatbots like ChatGPT and Google’s Bard has put huge pressure on governments to act. The European Union, which had already been negotiating regulations for A.I., has faced new demands to regulate a broader swath of A.I., instead of just systems seen as inherently high risk.
In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have moved to draft or propose legislation to regulate A.I. But concrete steps to rein in the technology in the country may be more likely to come first from law enforcement agencies in Washington.
A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible innovation,” while punishing violations of the law committed using the technology.
In a guest essay in The New York Times on Wednesday, Lina Khan, the chair of the Federal Trade Commission, said the nation was at a “key decision point” with A.I. She likened the technology’s recent developments to the birth of tech giants like Google and Facebook, and she warned that, without proper regulation, the technology could entrench the power of the biggest tech companies and give scammers a potent tool.
“As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself,” she said.
Here’s the factsheet: https://www.whitehouse.gov/briefing-room/statements-releases/2023/05/04/fact-sheet-biden-harris-administration-announces-new-actions-to-promote-responsible-ai-innovation-that-protects-americans-rights-and-safety/
“Today’s announcements include:
New investments to power responsible American AI research and development (R&D). The National Science Foundation is announcing $140 million in funding to launch seven new National AI Research Institutes. This investment will bring the total number of Institutes to 25 across the country, and extend the network of organizations involved into nearly every state. These Institutes catalyze collaborative efforts across institutions of higher education, federal agencies, industry, and others to pursue transformative AI advances that are ethical, trustworthy, responsible, and serve the public good. In addition to promoting responsible innovation, these Institutes bolster America’s AI R&D infrastructure and support the development of a diverse AI workforce. The new Institutes announced today will advance AI R&D to drive breakthroughs in critical areas, including climate, agriculture, energy, public health, education, and cybersecurity.
Public assessments of existing generative AI systems. The Administration is announcing an independent commitment from leading AI developers, including Anthropic, Google, Hugging Face, Microsoft, NVIDIA, OpenAI, and Stability AI, to participate in a public evaluation of AI systems, consistent with responsible disclosure principles—on an evaluation platform developed by Scale AI—at the AI Village at DEFCON 31. This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts to explore how the models align with the principles and practices outlined in the Biden-Harris Administration’s Blueprint for an AI Bill of Rights and AI Risk Management Framework. This independent exercise will provide critical information to researchers and the public about the impacts of these models, and will enable AI companies and developers to take steps to fix issues found in those models. Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.
Policies to ensure the U.S. government is leading by example on mitigating AI risks and harnessing AI opportunities. The Office of Management and Budget (OMB) is announcing that it will be releasing draft policy guidance on the use of AI systems by the U.S. government for public comment. This guidance will establish specific policies for federal departments and agencies to follow in order to ensure their development, procurement, and use of AI systems centers on safeguarding the American people’s rights and safety. It will also empower agencies to responsibly leverage AI to advance their missions and strengthen their ability to equitably serve Americans—and serve as a model for state and local governments, businesses and others to follow in their own procurement and use of AI. OMB will release this draft guidance for public comment this summer, so that it will benefit from input from advocates, civil society, industry, and other stakeholders before it is finalized.”