Executive summary: Model registries are an emerging form of AI governance that require developers to submit information about AI models to a centralized database, enabling governments to track and regulate AI development.
Key points:
Model registries require submitting basic information about AI models (purpose, size, algorithms) and sometimes more detailed data (benchmarks, risks, safety assessments); see the illustrative sketch after this list.
Registries allow governments to monitor the AI industry, target regulations at specific models, and enforce an “algorithms as entry point” governance approach.
Precedents exist in other domains like pharmaceutical registries, which require safety testing, incident reporting, and postmarket surveillance.
China has the most comprehensive registry requirements, the EU requires registration of “high-risk” systems, and the US focuses on compute power thresholds.
Model registries indicate differing regulatory priorities: content control (China), citizen rights (EU), and national security (US).
Registries will enable future regulations like mandatory safety assessments, transparency, incident reporting, and postmarket evaluations.
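To make the shape of such a submission concrete, here is a minimal sketch in Python of what a single registry entry covering the fields above might look like. The `ModelRegistryEntry` class and its field names are assumptions for illustration only, not any jurisdiction's actual schema.

```python
from dataclasses import dataclass, field, asdict
from typing import Optional
import json

@dataclass
class ModelRegistryEntry:
    """Hypothetical registry submission; field names are illustrative only."""
    developer: str
    model_name: str
    version: str
    intended_purpose: str                      # basic information
    parameter_count: int                       # rough proxy for "size"
    training_compute_flops: Optional[float]    # relevant to US-style compute thresholds
    algorithms: list[str] = field(default_factory=list)
    benchmark_results: dict[str, float] = field(default_factory=dict)  # more detailed data
    known_risks: list[str] = field(default_factory=list)
    safety_assessment_summary: Optional[str] = None

# Example submission serialized to JSON, as a centralized database might store it.
entry = ModelRegistryEntry(
    developer="Example Lab",
    model_name="example-model",
    version="1.0",
    intended_purpose="general-purpose text generation",
    parameter_count=70_000_000_000,
    training_compute_flops=1e25,
    algorithms=["transformer", "RLHF"],
    benchmark_results={"MMLU": 0.72},
    known_risks=["misuse for disinformation"],
    safety_assessment_summary="internal red-teaming completed",
)
print(json.dumps(asdict(entry), indent=2))
```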
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Also, here is a link if anyone wants to read more on the China AI registry, which seems to be based on the model cards paper.
Nice summary! I generally see model registries as a good tool for deployment safety: they log versions of algorithms and make it possible to track spikes in capabilities. I think a feasible way to push this into the current discourse is to situate it within the existing algorithmic transparency agenda.
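As a rough illustration of that "logging versions and tracking spikes in capabilities" idea, here is a small sketch. The log format, the benchmark field, and the 20% jump threshold are all arbitrary assumptions, not features of any existing registry.

```python
# Flag registered versions whose benchmark score jumps sharply relative to
# the previously registered version. Format and threshold are assumptions.
version_log = [
    {"version": "1.0", "benchmark_score": 0.55},
    {"version": "1.1", "benchmark_score": 0.57},
    {"version": "2.0", "benchmark_score": 0.74},  # large capability jump
]

SPIKE_THRESHOLD = 0.20  # flag relative improvements above 20%

for prev, curr in zip(version_log, version_log[1:]):
    gain = (curr["benchmark_score"] - prev["benchmark_score"]) / prev["benchmark_score"]
    if gain > SPIKE_THRESHOLD:
        print(f"Capability spike at {curr['version']} (+{gain:.0%}): flag for closer review")
```

A registry that received version-level benchmark data could run checks like this to decide which updates deserve extra scrutiny, which is one reason the version-nomenclature question below matters in practice.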
One potential risk is the question of who decides what counts as a new version of a given model: if the nomenclature is left in the hands of companies, it is prone to misuse. Also, the EU AI Act seems to take a risk-based approach, and the boundaries between its risk categories are more or less lines in the sand.
Another important point is what we do with the information gathered through these registries. I think there are "softer" responses (safety assessments, incident reporting) and "harder" ones (bans, disabling), and it seems likely to me that governments will lean towards the softer bucket to enable innovation and allow some due process to kick in. This is probably more true of the US, which has always favoured sector-specific regulation.