David Manheim talks a bit about using existing regulations here. Stuart Russell also mentioned the idea of recalls for models that cause harm at his Senate hearing.
Thanks. David Manheim’s descriptions are broad and move around, so I find them hard to pin down in terms of the governance mechanisms he is referring to.
My impression is that David’s descriptions of the use of existing laws were based on harms like consumer protection violations, misuse, copyright violation, and illegal discrimination.
This is also what I am arguing for (instead of trying to go through national governments to establish decrees worldwide to pause AI).
A difference between the focus of David’s post and mine is that he was describing how government departments could clarify and sharpen the implementation of existing regulations. The focus of my post is complementary: supporting communities to ensure enforcement happens through the court system.
Other things David mentioned, e.g. monitoring or banning AI systems larger than GPT-4, seem to require establishing new rules/laws one way or another.
I don’t see how establishing those new rules/laws would not be a lengthier process than enforcing already established laws in court. And even once the new rules/laws are written, signed, and approved/passed, new enforcement mechanisms need to be built around them.
I mean, if any country could pass this, that would be amazing: “laws today that will trigger a full ban on deploying or training AI systems larger than GPT-4”
I just don’t see the political will yet. I can imagine a country that does not have companies developing some of the largest models deciding to pass such a bill (maybe just against the use of such models). That would still be a win in terms of setting an example for other countries.
Maybe because it’s about future models, current politicians would be more okay with setting a limit a few “versions” higher than GPT-4, since in their eyes it won’t hamstring economic “progress” now, but rather hamstring future politicians.
Though adding in this exception is another recipe for future regulatory capture: “...which have not been reviewed by an international regulatory body with authority to reject applications”