[Congressional Hearing] Oversight of A.I.: Legislating on Artificial Intelligence


This is a selection of quotes from the Senate Subcommittee Hearing “Oversight of A.I.: Legislating on Artificial Intelligence” held on 9/12/23. It’s the third in a series of hearings; the first was a hearing on rules for AI (quotes here), the second a hearing on principles for regulation (quotes here).

I think this is important because it serves as a quick way to orient to this hearing: to get a sense of the important themes, and to develop from them a view of those involved and the ideas they discussed. Below, I’ll give quick summaries of the positions of the witnesses and senators, and will then present the quotes, topically organized.


Witnesses were: Woodrow Hartzog (Boston University, law professor), Brad Smith (Microsoft, president), and William Dally (NVIDIA, chief scientist).

  • Hartzog was focused more on how AI becomes another tool for the powerful, and on the non-x-risk side of AI generally, with ideas that resonate but often seem to lead to interventions that might not be as helpful for x-risk.

  • Brad Smith was on board with regulation; he even praised the B-H framework at one point. But he also continually emphasized the need for a conversation between innovation and safety, and would often use specific language implying he supported something similar to, but narrower than, what was being discussed (e.g., while endorsing the B-H framework he made clear that it should only apply to “advanced models in high-risk scenarios”).

  • William Dally called AGI “science fiction” and pushed back against the idea that regulating the AI supply chain is a feasible tool (“no nation, and certainly no company, controls a chokepoint to AI development”).

Senators were: Richard Blumenthal, Josh Hawley, Mazie Hirono, John Kennedy, Amy Klobuchar, Marsha Blackburn, and Jon Ossoff

  • Senator Blumenthal mostly asked clarifying questions that built on others’ questions, but also talked about his framework and the general importance of the issue

  • Senator Hawley was focused on non-x-risk topics, generally homing in on risks from kids interacting with chatbots and risks from AI use in China

  • Senator Hirono was largely concerned with misinformation

  • Senator Kennedy was focused entirely on notification of AI use

  • Senator Klobuchar was most concerned with various non-x-risk issues, like the use of AI to create synthetic media, and especially with how to prevent AI-generated synthetic media from being used in elections

  • Senator Blackburn was focused on China and disinformation (TikTok was an example)

  • Senator Ossoff asked pointed, useful questions throughout his time, directly aimed at how to codify things (like a definition of AI) for writing legislation


AGI

Dally: “Some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence, which could escape our control and cause harm. Fortunately, uncontrollable artificial general intelligence is science fiction and not reality. At its core, AI is a software program that is limited by its training, the inputs provided to it and the nature of its output. In other words, humans will always decide how much decision making power to cede to AI models.”

Importance

Blumenthal: “There is a moral imperative here...and when we simply [pursue] economic or political interests, sometimes it’s very shortsighted”

Hawley: “We have a responsibility...[to not] make the same mistakes Congress made with social media where 30 years ago Congress basically outsourced social media to the biggest corporations in the world and that has been, I would submit to you, a nearly unmitigated disaster”

Regulation

General

Blumenthal: “Private rights of action, as well as federal enforcement, are very important”

Hartzog: “Half measures like audits, assessments and certifications are necessary for data governance, but industry leverages procedural checks like these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance-based business models...it’s no substitute for meaningful liability.”

Proposals

Blumenthal-Hawley (B-H) Framework

Smith: “[The Blumenthal-Hawley Framework] is a very strong and positive step in the right direction...Let’s require licenses for advanced AI models and uses in high-risk scenarios. Let’s have an agency that is independent and can exercise real and effective oversight over this category”

Stop Button

Smith: “We need a safety brake, just like we have a circuit breaker in every building and home in this country, to stop the flow of electricity if that’s needed”

Uncertainties

At What Stage?

Hartzog: “I think that the area that has been ignored up until this point has been the design and inputs to a lot of these tools”

Dally: “I think it’s really the use of the model and the deployment that you can effectively regulate. It’s going to be hard to regulate the creation of it because if people can’t create them here they’ll create them somewhere else. I think we’ll have to be very careful if we want the US to stay ahead”

Is Domestic Regulation Feasible?

Dally: “No nation, and certainly no company, controls a chokepoint to AI development. Leading US computing companies are competing with companies from around the world...US companies...are not the only alternative for developers abroad. Other nations are developing AI systems with or without US components and they will offer those applications in the worldwide market. Safe and trustworthy AI will require multilateral and multi-stakeholder cooperation, or it will not be effective”

Dally: “We would like to ensure the US stays ahead in this field”

Ossoff: “How does any of this work without international law? Isn’t it correct that a model, potentially a very powerful and dangerous model, for example whose purpose is to unlock CBRN or mass-destructive virological capabilities for a relatively unsophisticated actor, once trained is relatively lightweight to transport, and without (a) an international legal system and (b) a level of surveillance into the flow of data across the internet that seems inconceivable, how can that be controlled and policed?”

...

Hartzog: “Ultimately what I worry about is deploying a level of surveillance that we’ve never before seen in an attempt to perfectly capture the entire chain of AI”

Blumenthal: “I think there are international models here where, frankly, the US is a leader by example and best practices are adopted by other countries when we support them.”

Smith: “We probably need an export control regime that weaves [GPUs, cloud compute, frontier models] together. For example, there might be a country in the world...where you all in the executive branch might say ‘we have some qualms, but we want US technology to be present, and we want US technology to be used properly’. You might say then ‘we’ll let NVIDIA export chips to that country to be used in, say, a datacenter of a company that we trust, that is licensed, even here, for that use, with the model being used in a secure way in that data center, with a know-your-customer requirement, and with guardrails that put certain kinds of use off-limits’. That may well be where government policy needs to go.”

...

Blumenthal: “I would analogize this situation to nuclear proliferation. We cooperate over safety, in some respects, with other countries, some of them adversaries, but we still do everything in our power to prevent American companies from helping China or Russia in their nuclear programs. Part of that non-proliferation effort is through export controls. We impose sanctions, we have limits and rules around selling and sharing certain chokepoint technologies relating to nuclear enrichment, as well as biological warfare, surveillance and other national security risks”

Dally: “The difference here is that there really isn’t a chokepoint and there’s a careful balance to be made between limiting where our chips go and what they’re used for...and disadvantaging American companies”

...

Dally: “We’re not the only people who make chips that can do AI...If people can’t get the chips they need to do AI from us, they will get them somewhere else, and what will happen then is, it turns out, the chips aren’t what makes them useful, it’s the software. And, if all of a sudden the standard chips for people to do AI become something from, pick a country, Singapore...and all the software engineers start writing the software for those chips, they’ll become the dominant chips and the leadership of that area will have shifted from the US to Singapore or whatever other country becomes dominant”

...

Smith: “Sometimes you can approach this and say, look, if we don’t provide this to somebody, somebody else will, so let’s not worry about it. But at the end of the day, whether you’re a company or a country, I think you do have to have clarity about how you want your technology to be used.”

How Should We Define What We License?

Ossoff: “Is the question which models are the most powerful in time, or is there a threshold of capability or power that should define the scope of regulated technology?”

Ossoff: “Is it a license to train a model to a certain capability? Is it a license to sell (or license access) to that model? Or is it a license to purchase or deploy that model? Who is the licensed entity?”

Smith: “That’s another question that is key and may have different answers in different scenarios but mostly I would say it should be a license to deploy...I think there may well be obligations to disclose to say an independent authority when a training run begins depending on what the goal [is]”

Smith: “Imagine we’re at GPT-12...Before that gets released for use, I think you can imagine a licensing regime that would say that it needs to be licensed after it’s been, in effect, certified as safe...Look at the world of civil aviation, that’s fundamentally how it has worked since the 1940s, let’s try to learn from it and see how we might apply something like that or other models here”

Regulate by Risk or Use Case?

Blumenthal: “To my colleagues who say there’s no need for new rules, we have enough laws protecting the public...we need to make sure that these regulations are targeted and framed in a way that applies to the risks involved. Risk-based rules, managing the risks, is what we need to do here”

Dally: “Fortunately many uses of AI applications are subject to existing laws and regulations that govern the sectors in which they operate. AI enabled services in high risk sectors could be subject to enhanced licensing and certification requirements when necessary, while other applications with less risk of harm may need less stringent licensing or regulation.”

Smith: “We’re going to need different levels of obligations and as we go forward let’s think about the connection between the role of, let’s say, a central agency that will be on point for certain things, as well as the obligations that frankly will be part of the work of many agencies...I do think that it would be a mistake to think that one single agency, or one single licensing regime, would be the right recipe to address everything”

Hartzog: “Lawmakers must accept that AI systems are not neutral and regulate how they are designed. People often argue that lawmakers should avoid design rules for technologies because there are no bad AI systems, only bad AI users. This view of technologies is wrong.”

Dally: “AI is a computer program, it takes an input and produces an output, and if you don’t connect up something that can cause harm to that output it can’t cause that harm”

Dally: “[Licensing] is dependent on the application, because if you have a model which is basically determining a medical procedure there’s a high risk for that. If you have another model which is controlling the temperature in your building, if it gets it a little bit wrong...it’s not a life-threatening situation...You need to regulate the things that...have high consequences if things go awry”

Timelines

Blumenthal: “Make no mistake, there will be regulation; the only question is how soon and what”

Blumenthal: “We’ll achieve legislation, I hope, by the end of this year”

Non-X-Risk Topics

Digital Provenance

Both Smith and Dally are in support

Jobs

Smith argued that AI will likely automate jobs that don’t involve creativity, and that this can be good because it frees people up to focus on “paying attention to other people and helping them”.