Props to @Andres Jimenez Zorrilla 🔸 for dealing with the razzing. Enduring people’s incredulous reactions is an important part of the work and you did a fantastic job being patient and earnest.
Really crack reporting, Garrison! You’re doing the lord’s work.
There’s nothing special going on here: higher levels of government prevail if laws at two different levels conflict. The federal government has the right to regulate AI in a way that preempts state-level governance: https://www.law.cornell.edu/wex/preemption
We are asking people to tell their Senators not to allow this provision to pass because it would choke out the best hope for AI regulation, given Congress’s lack of interest and heavy AI industry lobbying. It’s not offering any federal-level regulation, just making it so the states can’t implement any of their own.
That has nothing to do with this? The federal government has a right to pass laws at the federal level that preempt state-level laws. There’s a procedural problem with this one (it violates the Byrd Rule), but we are asking people to tell their Senators they oppose it because of its content.
What is your point? In this case Congress is showing no interest in AI regulation and is heavily lobbied by AI labs and related defense contractors, but state legislatures can act as a check on this, as the federalist system intended.
Who framed it in terms of individual rights?
In animal welfare, federal preemption is usually about powerful lobbies (animal ag, pesticides) wanting control and having more influence at the federal level. That’s the closer analogy in this case: AI company lobbies want a bottleneck they control. If Congress passes federal AI regulation, it can always preempt state-level regulation. What this provision says is that states can’t regulate AI, and there’s no federal proposal to do so instead.
You can share this information in tweet form: https://x.com/pauseaius/status/1922828892886401431
[urgent] Americans: call your Senators and tell them you oppose AI preemption
(FYI I’m the ED of PauseAI US and we have our own website pauseai-us.org)
1. It is morally incumbent on every actor to do the right thing by not advancing dangerous capabilities, regardless of whether everyone else does, even though everyone pausing and then agreeing to safe development standards is the ideal solution. That’s what that language refers to. I’m very careful about taking positions as an org, but, personally, I also think unilateral pauses would make the world safer compared to no pauses by slowing worldwide development. In particular, if the US were to pause capabilities development, our competitors wouldn’t have our frontier research to follow or imitate, and it would take other countries longer to generate those insights themselves.
2. “PauseAI NOW” is not just the simplest and best message to coordinate around, it’s also an assertion that we are ALREADY in too much danger. You pause FIRST, then sort out the technical details.
Feels like your true objection here is that frontier AI development just isn’t that dangerous? Otherwise I don’t know how you could be more concerned about the few piddling “inaccuracies and misleading statements that I won’t fully enumerate” than about nobody doing CAIP’s work to get the beginnings of safeguards in place.
(Makes much more sense if you were talking about unilateral pauses! The PauseAI pause is international, so that’s just how I think of Pause.)
Then there should be future legislation? Why is it on CAIP and this legislation to foresee the entire future? That’s a prohibitively high bar for regulation.
I love to see people coming to this simple and elegant case in their own way and from their own perspective— this is excellent for spreading the message and helps to keep it grounded. Was very happy to see this on the Forum :)
As for whether Pause is the right policy (I’m a founder of PauseAI and ED of PauseAI US), we can quibble about types of pauses or possible implementations, but I think “Pause NOW” is the strongest and clearest message. I think anything about delaying a pause or timing it perfectly is the unrealistic thing that makes it harder to achieve consensus and to have the effect we want, and Carl should know better. I’m still very surprised he said it given how much he seems to get the issue, but I think it comes down to trying to “balance the benefits and the risks”. Imo the best we can do for now is slam the brakes and not drive off the cliff; we can worry about benefits after.
When we treat international cooperation or a moratorium as unrealistic, we weaken our position and make that more true. So, at least when you go to the bargaining table, if not here, we need to ask for fully what we want without pre-surrendering. “Pause AI!”, not “I know it’s not realistic to pause, but maybe you could tap the brakes?” What’s realistic is to some extent what the public says is realistic.
Short answer: I think trying to time this is too galaxy-brained. I think getting the meme of Pause out there ASAP is good because it pushes the Overton window and gives people longer to chew on it. If and when warning shots occur, they will mainly advance Pause if, before they happened, people already had the idea that Pause would combat things like the warning shot.
I think takes that rely on saving up some kind of political capital and deploying it at the perfect time are generally wrong. PauseAI will gain more capital with more time and conversation, not use it up.
You can even take this further and question why the person on their deathbed doesn’t feel proud of the work they chose to do. Maybe they feel ashamed of their actual preferences and don’t need to. Or maybe they aren’t taking to heart the tradeoff in interests between the experiencing self and the remembering self.
To say someone is not “truthseeking” in Berkeley is like a righteous excommunication. It gets to be an epistemic issue.
Imo the biggest reason not to do this is that it’s labeling the person or getting at their character. There’s an implied threat that they will be dismissed out of hand because they are categorically in bad faith. It can be weaponized.
As someone trying to start a social movement (PauseAI), I wish EAs were more understanding and forgiving that there isn’t a great literature I can just follow. I feel confident that jumping in and finding my way was a good thing to do because advocacy and activism were neglected angles to a very important problem.
Most of my thinking and decision-making with PauseAI US is based on my world model, not specific beliefs about the efficacy of different practices or philosophies in other social movements. I expect local conditions, factors specific to the topic and landscape of AI, and organizational factors like my leadership style to be more important than which approach is “best” ceteris paribus.
Please :)