Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building, etc.), they did have a lobbying component, and I met with state and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state legislative offices and helped persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US’ lobbying efforts. So far, I have spoken to 16 Congressional offices — deliberately meeting with members of both parties, with a special focus on Congressmembers on relevant committees (e.g., the House Committee on Science, Space, and Technology; the Senate Committee on Commerce, Science, and Transportation; the House Bipartisan AI Task Force).
I plan to speak with another >50 offices over the next 6 months, as well as deepen relationships with offices I’ve already met. I also intend to host a series of Congressional briefings — on (1) AI existential risk, (2) a Pause as a solution, and (3) the importance and feasibility of international coordination — inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups to share insights and get feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
This is low-hanging fruit. Many Congressional offices haven’t yet heard of loss-of-control and existential AI risks; when I bring these risks up, it is often the first time these offices have encountered them. This means PauseAI US can play a foundational role in setting the narrative, with substantial leverage.
Offices are more receptive than one might expect to existential risk / loss-of-control scenarios, and even occasionally to the Pause solution.
Framing and vocabulary matter a lot here — it’s important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as “pro-safe innovation” rather than generically “anti-innovation,” anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (e.g., aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
It is crucially important to explain the importance and feasibility of international coordination on AI risk / an AI Treaty. A worrisome “default path” would be for the US to ramp up an AI arms race against China, leading to superintelligent AI before we are able to control it. To avoid this outcome, we need to convince US policymakers that (1) it doesn’t matter who builds superintelligence — we all lose; and (2) international coordination is feasible and tractable.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a “weapon” to be controlled by whichever country builds it first, but instead as a “doomsday device” that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause by appealing to historical precedent (e.g., nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (e.g., chip tracking, detection of large-scale training runs, on-chip reporting mechanisms).
The final reason PauseAI US lobbying is important is a counterfactual one: if we don’t lobby Congress, we risk ceding ground to other groups who push the “arms race” narrative and would convince the US to go full-speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.
One other thing I forgot to mention re: value-add. Some of the groups you mentioned (Center for AI Policy & Center for AI Safety; not sure about Palisade) are focused mostly on domestic AI regulation. PauseAI US is focused more on the international side of things, making the case for global coordination and an AI Treaty. In this sense, one of our main value-adds might be convincing members of Congress that international coordination on AI is both feasible and necessary to prevent catastrophic risk. This also serves to counter the “arms race” narrative (“the US needs to develop AGI first in order to beat China!”) which risks sabotaging AI policy in the coming years.