Thank you for the update – super helpful to see.
My overall views are fairly neutral. I lean in favour of this addition, but honestly it could go either way in the long run.
The addition means developers of general AI will go essentially unregulated. On the one hand that is bad, since it forgoes the potential benefits of oversight. On the other hand, regulating general AI in the way this act regulates high-risk AI would be the wrong approach for general AI.
In my view no regulation is better than inappropriate regulation, and it still leaves the door open to good regulatory practice later. Someone could argue that restrictive, inappropriate regulation would slow EU progress on general AI research, and that this slowdown would itself be good. I can understand that case, but I think the evidence for the value of slowing EU general AI research is weak, and my general preference against building inappropriate or broken regulatory systems is stronger.
(The addition also removes the ambiguity in the act as to whether it applied to general AI products, which is a welcome gain in legal clarity.)