I see way too many people confusing movement with progress in the policy space.
A lot of drafts can become bills while still leaving significant room for regulatory capture in the specifics, which get decided later on. Take risk levels, for instance: they're subjective, and that subjectivity is legal leeway for companies to exploit.
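To make the "legal leeway" point concrete, here's a toy sketch (all names, scores, and cutoffs are hypothetical, not drawn from any actual bill): if the law says "significant risk" but leaves the thresholds to later rulemaking, whoever sets the cutoffs effectively decides the outcome.

```python
def risk_tier(score: float, high_cutoff: float, limited_cutoff: float) -> str:
    """Map a self-assessed risk score in [0, 1] to a regulatory tier.
    The cutoffs are free parameters -- exactly the 'specifics decided later'."""
    if score >= high_cutoff:
        return "high risk"
    if score >= limited_cutoff:
        return "limited risk"
    return "minimal risk"

score = 0.62  # a model's self-assessed risk score (hypothetical)

# A regulator-favoured reading of "significant risk"...
print(risk_tier(score, high_cutoff=0.6, limited_cutoff=0.3))  # -> high risk

# ...versus an industry-favoured reading of the same vague term.
print(risk_tier(score, high_cutoff=0.8, limited_cutoff=0.5))  # -> limited risk
```

Same model, same score, different tier. That gap is where the capture happens.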
I don't think we have a good answer for what happens after we audit an AI model and find something wrong.
Given that our understanding of AI's internal workings is at least a generation behind the capabilities themselves, it's not as if we can isolate which mechanism is causing a given behaviour. (Would really appreciate any input here; I see little to no discussion of this in governance papers. It's almost as if policy folks are oblivious to the technical hurdles that await working groups.)
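For anyone who wants the flavour of the problem, here's a deliberately silly two-unit sketch (a hypothetical toy model, not any real architecture or any real auditing procedure): the "bad behaviour" is encoded redundantly across units, so zeroing out any single unit, which is roughly what a naive audit-then-fix loop would try, barely moves the output.

```python
import numpy as np

# Hidden layer writes the same signal into two redundant directions.
W_in = np.array([[1.0], [1.0]])  # input -> 2 hidden units
w_out = np.array([0.5, 0.5])     # hidden -> scalar "bad behaviour" score

def behaviour(x: float, ablate=None) -> float:
    h = W_in @ np.array([x])     # hidden activations
    if ablate is not None:
        h[ablate] = 0.0          # zero out one unit, as an auditor might
    return float(w_out @ h)

x = 1.0
print(behaviour(x))              # 1.0 -> behaviour present
print(behaviour(x, ablate=0))    # 0.5 -> still half present
print(behaviour(x, ablate=1))    # 0.5 -> same for the other unit
```

No single unit "causes" the behaviour here; it's distributed. Real models are this, but with billions of parameters, and the audit report still has to say *something* actionable.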