What is the EU AI Act and why should you care about it?
On April 21st, 2021, the European Commission announced its proposal for European regulation of AI.
The proposal strives to be for AI what GDPR has been for data protection.
Over the coming years the proposal will go through multiple readings in the Parliament and Council, where modifications can be proposed to the act before it is finally adopted or dismissed.
After the AI Act was announced, most attention fell on the act's wide definition of AI and its blanket bans on "manipulative AI". Unfortunately this focus has led many to miss the act's most important points.
I noticed there were no forum posts that would help someone get up to speed on the act. In this post I will summarise the act's most important points, how it may affect the development of transformative AI, and the EA community's response to the proposal.
What are the act's important points?
Below I outline what I deem the most important points of the regulation, based on their effect on the development of transformative AI. To keep the summary brief, I skip lightly over many details of the act, such as regulatory sandboxes and special rules for biometric systems.
The act will apply to all EU countries and supersede any conflicting national law. Because of the act's broad definition of AI, it will be difficult for any EU country to make laws on AI that do not conflict with it.
It does not apply to military use of AI. Here countries are free to do as they see fit.
Rules for "high-risk" AI
The Act lists a series of "high-risk" areas. Systems operating in a high-risk area are considered high-risk and must be reviewed and approved before they can be placed on the market. This means that the AI Act's regulation will not apply to AI developed and used internally by companies. High-risk systems include everything from AI management of electricity grids to AI that determines whom to promote or fire.
The areas considered high-risk, in which certain uses are restricted, are the following:
biometric identification and categorization
management and operation of critical infrastructure
educational and vocational training
employment, worker management, and access to self-employment
access to and enjoyment of essential services and benefits
law enforcement
migration, asylum and border management
administration of justice and democracy
A common thread among the systems considered high-risk is that they make decisions which significantly affect the lives of citizens. The full list of high-risk systems is two pages long and can be read in Annex III.
After the law is passed, the Commission can add new uses of AI that must be approved, as long as they fall under one of the existing high-risk areas.
For a high-risk system to be approved, the provider must submit detailed technical documentation for the system.¹ Requirements for technical documentation include:
design specifications, key design choices, and a description of what the system is optimizing.
description of any use of third-party tools.
description of the training data, how it has been obtained, and how it has been processed.
how the system can be monitored and controlled.
that the system has "in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator" (!!)
description of foreseeable risks the system poses to EU citizens' health, safety and fundamental rights.
In other words, the act creates a large, partially updatable list of areas where nobody is allowed to deploy AI without explicit approval, which you only get by meeting numerous safety requirements, one of which is a working off-switch. High-risk systems not only need approval, but must also be continuously monitored after they are placed on the market.
Establishment of the European AI Board
To enforce this, the act requires new institutions to be created.
The European AI Board.
Run by the EU Commission (EU's civil service).
Oversees national authorities and settles disputes.
National supervisory authorities in every EU country.
Countries are free to structure AI authority as they see fit.
Can create regulatory sandboxes, which allow companies to override the act's regulation in controlled settings.
The national supervisory authorities are responsible for approving high-risk systems and doing post-market monitoring to ensure approved systems are working as intended and pose no threat to EU citizens.
If two national supervisory authorities get into a dispute over whether a system should be approved or not, the European AI Board steps in and makes a final decision. The European AI Board is also responsible for overseeing and coordinating the national supervisory authorities.
Blanket ban of certain AI uses
Social scoring systems by governments are entirely banned.
The AI Act also bans use of AI that "deploys subliminal techniques beyond a person's consciousness in order to materially distort a person's behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm".
If you are finding it unclear exactly which AI systems would fall under this definition, you are picking up on an important EU practice. By leaving the law deliberately vague, it becomes the job of the European Court of Justice and standardisation bodies to determine which systems fall under the definition when specific cases of accused misuse are brought to court. This is done with the belief that specific definitions fall prey to loopholes, whereas the court is better able to punish only those who go against the "spirit" of the law.
A less flattering analysis is that the ban is window dressing. The Commission has struggled to come up with any example of a banned use case that wouldn't already be considered illegal under other EU regulation.²
AI systems must make themselves known
Any AI system that interacts with humans must make it clear that it is an AI.
A customer-service chatbot pretending to be a human agent will, for example, be illegal.
Users of an AI system which generates deepfakes or similar content must disclose to their audience that the content is fake.
Users of emotion recognition or biometric categorization systems must disclose this to the subjects that they are using the system on.
Why should you care about the AI Act?
The AI Act is a smoke-test for AI governance
The AI Act creates institutions responsible for monitoring high-risk systems, and possibly for broader monitoring of AI development in Europe. Doing so effectively is a monumentally difficult task that takes trial and error to do well.
If/when the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice. Other countries looking to implement similar oversight measures will also be able to learn from the AI Board's successes and failures.
The Brussels Effect
The EU AI Act is the single biggest piece of AI legislation introduced to the world yet. If history is anything to go by, there are good reasons to believe the act will influence the development of AI and AI legislation the world over.
When GDPR was introduced it was cheaper for Microsoft to just implement GDPR worldwide than to create a separate European version of every service they offer. Similarly we can expect it to be cheaper for AI developers serving the European market to ensure all systems are developed to be compliant with European regulation. This phenomenon has been dubbed the Brussels Effect.
The extent to which the Brussels Effect will affect the development of transformative AI is conditional on the continuity of AI takeoff.
If transformative AI is brought about by a continuous stream of incremental improvements, we can expect development to be constrained by near-term profits. Companies choosing to forego the European market would face a competitive disadvantage. In such a world, European laws and regulations are likely to play a significant international role.
In a world where transformative AI is brought about by discontinuous jumps in capability, we are much more likely to see races between private companies and governments alike all gunning to be first. In this world the European Union will struggle to be internationally influential.
I have written a rough analysis of why this is which can be read here.
The AI Act lays the foundation for future AI regulation
The AI Act sets up institutions that will play an important role in all future regulation. Lawmakers around the world will draw lessons from its successes and failures. An AI Act that is a smashing success moves the Overton window and enables future regulation. The act also sets important legal precedents, for example that high-risk AI should be continuously monitored to prevent harm.
Once passed, the legislation is unlikely to see major changes or updates
Flagship regulation of the EU such as GDPR and REACH (chemical regulation) tend not to see major updates even decades after having been passed.
Going by historical precedent, Europe will be stuck with whatever act is passed for a while. That precedent may not be particularly applicable, though: if AI starts rapidly and visibly transforming society, I doubt the Commission will be shy about suggesting large updates to the regulation.
Responses from the EA community
The AI Act has received mixed responses within the EA community. I've summarized what I view as the main positive points emphasized by the EA community and the main areas that need improvement.
Positive points often emphasized
The act justifies AI regulation through the need to protect citizens' health, safety and fundamental rights. This sets a fantastic precedent for future regulation.
The need for continuous monitoring of high-risk AI and the creation of institutions capable of doing so.
That high-risk systems "must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator" (i.e. a working off-switch)
Commonly suggested improvements
Every definition and every wording assumes that AI is specialized and narrow. Only minor changes are needed to enable the commission to regulate general AI systems with many intended uses.
The European AI Board will currently do little to explicitly monitor progress of AI as a whole. The European AI Board can be made responsible for maintaining a database of AI accidents and near misses.
The act only affects AI that is placed on the market. The European AI Board can be made responsible for monitoring non-market AI for industrial accidents, similarly to what is done with chemical regulation.
Operators and developers of high-risk systems must explicitly consider possible violations of an individualâs health, safety or fundamental rights. The conformity assessment for high-risk systems could require operators and developers to also consider societal-scale consequences.
You can read the public responses from various EA and EA-adjacent organisations here:
What is next
Insofar as the AI Act matters, now is the time to act. The EA community is generally hesitant to engage directly with policy, for good reason: we barely know what good AI policy looks like, and we would prefer to wait until we understand how to act and what the consequences of acting would be.
But the rest of the world is not static, and will adopt policy even if we would prefer to wait. The choice is not between engaging with the act now or later; it is between engaging now or never.
The AI Act is not yet final, but the European Union is likely to pass some version of it in the coming years. The name of the game for EA organisations engaged with the act is generally to push for improvements similar to the commonly suggested ones, but there is much more work that can be done.
If the act is passed, EAs wanting to work with AI in the European Union should keep an eye out for new opportunities, such as working in the European AI Board or the national supervisory authorities. This may be particularly impactful in the early years of these institutions, when their culture and practices are particularly malleable.
¹ The full list of required technical documentation can be found in Annex IV.
² Demystifying the AI Act argues this in greater detail.
Thank you for writing this summary!
I wanted to share this new website about the AI Act we have set up together with colleagues at the Future of Life Institute: https://artificialintelligenceact.eu/. You can find the main text, annexes, some analyses of the proposal, and the latest developments on the site. Feel free to get in touch if you'd like to discuss the proposal or have suggestions for the website. We'd like it to be a good resource for the general public, but also for people following the regulation more closely.
"If/when the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice."
It's true that setting up institutions earlier allows for more practice, and I suspect the act is probably good on the whole, but it's also worth considering potential negative aspects of setting up institutions earlier. For example:
potential for more institutional sclerosis
institutional inertia may ~lock in features now, despite having a less clear-eyed view than we'll likely have in the future
Excellent overview, and I completely agree that the AI Act is an important policy for AI governance.
One quibble: as far as I know, the Center for Data Innovation is just a lobbying group for Big Tech, so I was a little surprised to see it listed in "public responses from various EA and EA-adjacent organisations".
I'm not very familiar with the Center for Data Innovation, thank you for pointing this out!
I included their response because its author is familiar with EA and it is well reasoned. I also felt it would be healthy to include a perspective and set of concerns vastly different from my own, as the post is already biased by my choice of focus.
That being said, I haven't gotten the best impression of some of the Center for Data Innovation's research. As far as I can tell, their widely cited analysis projecting the act to cost €31 billion has a flaw in its methodology that inflates the estimate. In their defense, their cost analysis is also conservative in other ways, leading to a lower number than what might be reasonable.
Thank you for this! Very useful.
In what sense is the AI board (or some other institution?) responsible for monitoring AI progress as a whole?
Sorry, I should have said "monitoring AI progress in Europe as a whole", and even then I think it might be misleading.
One of the three central tasks of the AI Board is to "coordinate and contribute to guidance and analysis by the Commission and the national supervisory authorities and other competent authorities on emerging issues across the internal market with regard to matters covered by this Regulation;"
For example, if a high-risk AI system is compliant but still poses a risk, the provider is required to immediately inform the AI Board. The national supervisory authorities must also regularly report back to the AI Board about the results of their market surveillance, among other things.
So the AI Board gets both the mandate and the information to monitor how AI progresses in the EU. And they have to do so to carry out their task effectively, even if it's not directly stated anywhere that they are required to do so.
I hope this clears it up. I'm happy that you found the post useful!
I think this is a better link to FLI's position on the AI Act: https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12527-Artificial-intelligence-ethical-and-legal-requirements/F2665546_en
(The one in the post goes to their opinion on liability rules. I don't know the relationship between that and the AI Act.)
Thank you for spotting that mistake. This is the position I meant to link to; I've replaced the link in the post.
The latest information as of June 2023:
Thank you for this very useful post, it really helped me better understand the topic :)