Other things David mentioned around eg. monitoring or banning AI systems larger than GPT-4 seem to require establishing new rules/laws somehow or another.
Establishing those new rules/laws seems bound to be a lengthier process than enforcing already established laws in court. And even once the new rules/laws are written, signed, and approved/passed, new enforcement mechanisms need to be built around them.
I mean, if any country can pass this that would be amazing:
“laws today that will trigger a full ban on deploying or training AI systems larger than GPT-4”
I just don’t see the political will yet?
I can imagine a country that does not have companies developing any of the largest models deciding to pass such a bill (maybe just banning the use of such models). That would still be a win in terms of setting an example for other countries.
Maybe because it’s about future models, current politicians would be more okay with setting a limit a few “versions” higher than GPT-4, since in their eyes it won’t hamstring economic “progress” now, but rather hamstring future politicians.
Though adding in this exception is another recipe for future regulatory capture:
“...which have not been reviewed by an international regulatory body with authority to reject applications”