Disclaimer: Quickly written; I am not an expert in this legislation and am happy to be corrected if my interpretations are wrong.
The current draft of the EU AI Act seems problematic. Efforts to address risks from transformative AI are overshooting in a way that would severely hamper the development and application of generative AI in the EU, and would create unclear legal situations for persons outside the EU who make generative AI systems available (including by simply uploading a free-to-use open-source model). This is bad in two ways:
Intrinsically: if the draft version were enacted, it would lead to significant economic damage and public outrage in the EU, potentially even causing lasting damage to the EU as an institution.
Instrumentally, from an AI risk perspective: there will likely be a fierce backlash against the regulation as proposed. This risks over-correction, or no regulation being enacted at all, and might decrease public trust in actors advocating for AI regulation.
The problem is that the draft's requirements apply broadly to ‘foundation models’, without regard to their level of capability, autonomy or risk. In my reading, the following regulatory requirements could therefore apply even to models as trivial as GPT-J or T5. The text is also unclear enough that fine-tuning an existing model might count as bringing a novel foundation model to market, subjecting the person doing the fine-tuning to the same obligations. Note that no profit motive is needed to fall under these regulations. The relevant provision reads:
Article 28b
Obligations of the provider of a foundation model
1. A provider of a foundation model shall, prior to making it available on the market or putting it into service, ensure that it is compliant with the requirements set out in this Article, regardless of whether it is provided as a standalone model or embedded in an AI system or a product, or provided under free and open source licences, as a service, as well as other distribution channels.
2. For the purpose of paragraph 1, the provider of a foundation model shall:
(a) demonstrate through appropriate design, testing and analysis that the identification, the reduction and mitigation of reasonably foreseeable risks to health, safety, fundamental rights, the environment and democracy and the rule of law prior and throughout development with appropriate methods such as with the involvement of independent experts, as well as the documentation of remaining non-mitigable risks after development;
(b) process and incorporate only datasets that are subject to appropriate data governance measures for foundation models, in particular measures to examine the suitability of the data sources and possible biases and appropriate mitigation;
(c) design and develop the foundation model in order to achieve throughout its lifecycle appropriate levels of performance, predictability, interpretability, corrigibility, safety and cybersecurity assessed through appropriate methods such as model evaluation with the involvement of independent experts, documented analysis, and extensive testing during conceptualisation, design, and development;
(d) design and develop the foundation model, making use of applicable standards to reduce energy use, resource use and waste, as well as to increase energy efficiency, and the overall efficiency of the system. This shall be without prejudice to relevant existing Union and national law and this obligation shall not apply before the standards referred to in Article 40 are published. They shall be designed with capabilities enabling the measurement and logging of the consumption of energy and resources, and, where technically feasible, other environmental impact the deployment and use of the systems may have over their entire lifecycle;
(e) draw up extensive technical documentation and intelligible instructions for use in order to enable the downstream providers to comply with their obligations pursuant to Articles 16 and 28.1.;
(f) establish a quality management system to ensure and document compliance with this Article, with the possibility to experiment in fulfilling this requirement,
(g) register that foundation model in the EU database referred to in Article 60, in accordance with the instructions outlined in Annex VIII paragraph C. When fulfilling those requirements, the generally acknowledged state of the art shall be taken into account, including as reflected in relevant harmonised standards or common specifications, as well as the latest assessment and measurement methods, reflected notably in benchmarking guidance and capabilities referred to in Article 58a (new).
3. Providers of foundation models shall, for a period ending 10 years after their foundation models have been placed on the market or put into service, keep the technical documentation referred to in paragraph 1(c) at the disposal of the national competent authorities;
4. Providers of foundation models used in AI systems specifically intended to generate, with varying levels of autonomy, content such as complex text, images, audio, or video (“generative AI”) and providers who specialise a foundation model into a generative AI system, shall in addition
a) comply with the transparency obligations outlined in Article 52 (1),
b) train, and where applicable, design and develop the foundation model in such a way as to ensure adequate safeguards against the generation of content in breach of Union law in line with the generally acknowledged state of the art, and without prejudice to fundamental rights, including the freedom of expression,
c) without prejudice to national or Union legislation on copyright, document and make publicly available a sufficiently detailed summary of the use of training data protected under copyright law.
The penalties levied for violating regulations are significant:
Non-compliance of AI system or foundation model with any requirements or obligations under this Regulation, other than those laid down in Articles 5, and 10 and 13, shall be subject to administrative fines of up to 10 000 000 EUR or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
5. The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 5 000 000 EUR or, if the offender is a company, up to 1 % of its total worldwide annual turnover for the preceding financial year, whichever is higher.
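To make the scale of these caps concrete, here is a minimal sketch of the "whichever is higher" rule from the quoted provisions. The company turnover figures are hypothetical, and this is purely an illustration of the arithmetic, not legal advice.

```python
# Illustrative sketch of the "whichever is higher" fine cap from the quoted
# penalty provision; the turnover figures below are hypothetical examples.

def max_fine_eur(annual_turnover_eur: float,
                 flat_cap_eur: float = 10_000_000.0,
                 turnover_share: float = 0.02) -> float:
    """Upper bound of the administrative fine: the flat cap or the
    turnover-based cap, whichever is higher."""
    return max(flat_cap_eur, turnover_share * annual_turnover_eur)

# A small startup with EUR 2M turnover still faces a cap of EUR 10M ...
print(max_fine_eur(2_000_000))      # 10000000.0 (flat cap dominates)
# ... while a company with EUR 5B turnover faces a cap of EUR 100M (2%).
print(max_fine_eur(5_000_000_000))  # 100000000.0 (turnover share dominates)
```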
Given that the current draft of the AI Act would kill European AI startups and open-source projects, and could lead international AI corporations to withdraw from the EU, it is likely that major modifications will still be made, or that the AI Act will face major challenges in its entirety.
From a global AI risk perspective, what seems required to avoid both over- and undershooting regulatory strictness is a viable definition of high-risk foundation models based on capability and risk thresholds, and focusing regulation on such systems. This is a difficult task that requires expertise in cutting-edge AI research and policy. Nonetheless, finding a balanced take on the risks of different foundation models is essential for shaping risk-reducing AI policy that works in practice.
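As a purely hypothetical sketch of what such a capability-and-risk-based definition could look like in practice: the criteria, the compute threshold and the field names below are invented for illustration and do not appear in the draft.

```python
from dataclasses import dataclass

@dataclass
class FoundationModelProfile:
    # Hypothetical criteria, invented for this sketch; not taken from the AI Act draft.
    training_compute_flop: float      # capability proxy: total training compute
    autonomy: bool                    # can act with limited human oversight
    deployed_to_general_public: bool  # broad availability amplifies risk

def is_high_risk(profile: FoundationModelProfile,
                 compute_threshold_flop: float = 1e25) -> bool:
    """Toy classifier: flag a model as 'high-risk' only if it crosses a
    capability proxy (training compute) AND is used in a risk-amplifying way."""
    capable = profile.training_compute_flop >= compute_threshold_flop
    risky_context = profile.autonomy or profile.deployed_to_general_public
    return capable and risky_context

# A GPT-J-scale model (rough, illustrative compute figure) would fall well
# below such a threshold and stay out of scope of the high-risk obligations.
gpt_j_like = FoundationModelProfile(
    training_compute_flop=1e22, autonomy=False, deployed_to_general_public=True)
print(is_high_risk(gpt_j_like))  # False
```

The point is not the specific numbers but the structure: obligations would attach only when both a capability proxy and a risk-amplifying deployment condition are met, so trivial models and hobbyist fine-tunes remain out of scope.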