I’m confident in PauseAI US’s ability to run protests and I think the case for doing protests is pretty strong. You’re also doing lobbying, headed by Felix De Simone. I’m less confident about that so I have some questions.
There are no other organized groups (AFAIK) doing AI pause protests in the US of the sort you’re doing. But there are other groups talking to policy-makers, including Center for AI Policy, Center for AI Safety, and Palisade (plus some others outside the US, and some others that focus on AI risk but that I think are less value-aligned). What is the value-add of PauseAI US’s direct lobbying efforts compared to these other groups? And are you coordinating with them at all?
What is Felix’s background / experience in this area? Basically, why should I expect him to be good at lobbying?
1. Our lobbying is more “outside game” than the others in the space. Rather than getting our lobbying authority from prestige or expense, we get it from our grassroots support. Our message is simpler and clearer, pushing harder on the Overton window. (More on the radical flank effect here.) Our messages can complement more constrained lobbying from aligned inside gamers by making their asks seem more reasonable and safe, which is why our lobbying is not redundant with those other orgs but synergistic.
2. Felix has experience on climate campaigns and climate canvassing and was a leader in U Chicago EA. He’s young, so he hasn’t had many years of experience at anything, but he has the relevant kinds of experience that I wanted, and he is demonstrably excellent at educating, building bridges, and juggling a large network. He has the tact and sensitivity you want in a role like this while also being very earnest. I’m very excited to nurture his talent and have him serve as the foundation for our lobbying program going forward.
One other thing I forgot to mention re: value-add. Some of the groups you mentioned (Center for AI Policy & Center for AI Safety; not sure about Palisade) are focused mostly on domestic AI regulation. PauseAI US is focused more on the international side of things, making the case for global coordination and an AI Treaty. In this sense, one of our main value-adds might be convincing members of Congress that international coordination on AI is both feasible and necessary to prevent catastrophic risk. This also serves to counter the “arms race” narrative (“the US needs to develop AGI first in order to beat China!”), which risks sabotaging AI policy in the coming years.
Happy to weigh in here with some additional information/thoughts.
Before I started my current role at PauseAI US, I worked on statewide environmental campaigns. While these were predominantly grassroots (think volunteer management, canvassing, coalition-building, etc.), they did have a lobbying component, and I met with statewide and federal offices to advance our policy proposals. My two most noteworthy successes were statewide campaigns in Massachusetts and California, where I met with a total of ~60 state legislative offices and helped persuade the legislatures of both states to pass our bills (clean energy legislation in MA; pollinator protection in CA) despite opposition from the fossil fuel and pesticide industries.
I have been in D.C. since August working on PauseAI US’s lobbying efforts. So far, I have spoken to 16 Congressional offices — deliberately meeting with members of both parties, with a special focus on Congressmembers on relevant committees (e.g., the House Committee on Science, Space, and Technology; the Senate Committee on Commerce, Science, and Transportation; and the House Bipartisan AI Task Force).
I plan to speak with another >50 offices over the next 6 months, as well as deepen relationships with the offices I’ve already met. I also intend to host a series of Congressional briefings — on (1) AI existential risk, (2) Pausing as a solution, and (3) the importance and feasibility of international coordination — inviting dozens of Congressional staff to each briefing.
I do coordinate with a few other individuals from aligned AI policy groups to share insights and gain feedback on messaging strategies.
Here are a few takeaways from my lobbying efforts so far, explaining why I believe PauseAI US lobbying is important:
This is low-hanging fruit. Many Congressional offices haven’t yet heard of loss-of-control and existential AI risks; when I bring these risks up, it is often the first time these offices have encountered them. This means PauseAI US can play a foundational role in setting the narrative, which gives us a lot of leverage.
Offices are more receptive than one might expect to existential risk / loss-of-control scenarios, and even occasionally to the Pause solution.
Framing and vocabulary matter a lot here — it’s important to find the best ways to make our arguments palatable to Congressional offices. This includes, for instance, framing a Pause as “pro-safe innovation” rather than generically “anti-innovation,” anticipating and addressing reasonable objections, making comparisons to how we regulate other technologies (e.g., aviation, nuclear power), and providing concrete risk scenarios that avoid excessive technical jargon.
It is crucially important to explain the feasibility and importance of international coordination on AI risk / an AI Treaty. A worrisome “default path” might be for the US to ramp up an AI arms race against China, leading to superintelligent AI before we are able to control it. To avoid this outcome, we need to convince US policymakers that (1) it doesn’t matter who builds superintelligence: we all lose; and (2) international coordination is feasible and tractable.
As such, I spend a lot of time emphasizing loss-of-control scenarios, making the case that this technology should not be thought of as a “weapon” to be controlled by whichever country builds it first, but instead as a “doomsday device” that could end our world regardless of who builds it.
I also make the case for the feasibility of an international pause by appealing to historical precedent (e.g., nuclear non-proliferation agreements) and sharing information about verification and enforcement mechanisms (e.g., chip tracking, detection of large-scale training runs, and on-chip reporting mechanisms).
The final reason for the importance of PauseAI US lobbying is a counterfactual one: if we don’t lobby Congress, we risk ceding ground to other groups who push the “arms race” narrative and would convince the US to go full-speed ahead on AGI development. By being in the halls of Congress and making the most persuasive case for a Pause, we are at the very least helping prevent the pendulum from swinging in the opposite direction.