Küspert also says “no exemptions,” which I interpret to mean “no exemptions to the systemic-risk rules for open-source systems.” Other reporting suggests there are wide exemptions for open-source models, but that the requirements kick back in if the models pose systemic risks. However, Yann LeCun is celebrating based on this passage from a Washington Post article: “The legislation ultimately included restrictions for foundation models but gave broad exemptions to ‘open-source models,’ which are developed using code that’s freely available for developers to alter for their own products and tools. The move could benefit open-source AI companies in Europe that lobbied against the law, including France’s Mistral and Germany’s Aleph Alpha, as well as Meta, which released the open-source model LLaMA.” So it’s currently unclear to me where the Act lands on this question, and I think a close review by someone with legal or deep EU policy expertise might help illuminate it.
It’s a shame this is so unclear. To me this is basically the most important part of the Act, and it intuitively seems to make the difference between ‘the law is net bad because it gives only the appearance of safety while adding a lot of regulatory overhead’ and ‘the law is good’.
Thanks for sharing!