Hmm, could you clarify how this would be coordinated in practice?
Currently what I see is a bunch of signatures and a bunch of people writing pretty good tweets, but no clear mechanism connecting that to any AI company leaders actually deciding to pause the scaling of model training and deployment.
The Center for AI and Digital Policy did mention the FLI pause letter in their complaint against OpenAI to the FTC, so maybe that kind of advocacy helps move the needle.
Everyone joining together in mass protests and letter-writing/phone campaigns calling on their representatives to push for a pause. We probably just need ~1-10% of the 63% who don’t want AGI/ASI to take such action.
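As a rough back-of-the-envelope (assuming, purely as an illustration, that the 63% figure is of US adults, roughly 260 million people):

$$0.63 \times 260\text{M} \approx 164\text{M}, \qquad 1\%\text{–}10\% \times 164\text{M} \approx 1.6\text{M–}16\text{M}$$

So under those assumptions, “such action” would mean something on the order of one to sixteen million people.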
We don’t need to get the AI company leaders to agree, we need to get governments to force them.
Good step to aim for. I like it.

How about from there? How would US, UK, and other national politicians who got somewhat convinced (that this will help their political careers) use existing regulations to monitor and enforce restrictions on AI development?
David Manheim talks a bit about using existing regulations here. Stuart Russell also mentioned the idea of recalls for models that cause harm at his Senate hearing.
Thanks. David Manheim’s descriptions are broad and shifting, so I find it hard to pin down which governance mechanisms he is referring to.
My impression is that David’s description of the use of existing laws was based on harms like consumer protection violations, misuse, copyright infringement, and illegal discrimination.
This is also what I am arguing for (instead of trying to go through national governments to establish decrees worldwide to pause AI).
A difference between the focus of David’s post and mine is that he was describing how government departments could clarify and sharpen the implementation of existing regulations. The focus of my post is complementary: supporting communities to ensure enforcement happens through the court system.
Other things David mentioned, e.g. monitoring or banning AI systems larger than GPT-4, seem to require establishing new rules/laws one way or another.
I don’t see how establishing those new rules/laws would not be a lengthier process than enforcing already-established laws in court. And even once the new rules/laws are written, signed, and approved/passed, new enforcement mechanisms need to be built around them.
I mean, if any country can pass this, that would be amazing: “laws today that will trigger a full ban on deploying or training AI systems larger than GPT-4”.
I just don’t see the political will yet? I can imagine a country that does not have companies developing some of the largest models deciding to pass this bill (maybe just against the use of such models). That would still be a win in terms of setting an example for other countries.
Maybe because it’s about future models, current politicians would be more okay with setting a limit a few “versions” higher than GPT-4, since in their eyes it won’t hamstring economic “progress” now, but rather tie the hands of future politicians.
Though adding in this exception is another recipe for future regulatory capture: “...which have not been reviewed by an international regulatory body with authority to reject applications”.
Even if you think the priority is to shut the current models down, an indefinite pause on further development is a great first step toward that.
I’m all for an indefinite pause, i.e. a moratorium on all the precursors needed for further AI development, of course!