What is the EU AI Act and why should you care about it?

On April 21, 2021, the European Commission announced its proposal for European regulation of AI.

The proposal strives to be for AI what GDPR has been for data protection.
Over the coming years the proposal will go through multiple readings in the Parliament and Council, where modifications can be proposed to the act before it is finally adopted or dismissed.

After the AI Act was announced, most attention fell on the act’s wide definition of AI and blanket bans on ‘manipulative AI’. Unfortunately, this focus has led many to overlook the act’s most important points.

I noticed there were no forum posts that would help someone get up to speed on the act. In this post I will summarise the act’s most important points, how it may affect the development of transformative AI, and the EA community’s response to the proposal.

What are the act’s important points?

Below I outline what I deem the most important points of the regulation, based on their effect on the development of transformative AI. To keep the summary brief, I skip lightly over many details of the act, such as regulatory sandboxes and special rules for biometric systems.

The act will apply to all EU countries and supersede any conflicting national law. Because of the act’s broad definition of AI, it will be difficult for any EU country to make laws on AI that do not conflict with it.

It does not apply to military use of AI. Here countries are free to do as they see fit.

Rules for ‘high-risk’ AI

The Act lists a series of ‘high-risk’ areas. Systems operating in a high-risk area are considered high-risk and must be reviewed and approved before they can be placed on the market. Since this requirement attaches to placing systems on the market, the AI Act’s regulation will not apply to AI developed and used purely internally by companies. High-risk systems include everything from AI management of electricity grids to AI that determines whom to promote or fire.

The high-risk areas in which certain uses are restricted are the following:

  • biometric identification and categorization

  • management and operation of critical infrastructure

  • educational and vocational training

  • employment, worker management, and access to self-employment

  • access to and enjoyment of essential services and benefits

  • law enforcement

  • migration, asylum and border management

  • administration of justice and democracy

A common thread among the systems considered high-risk is that they make decisions which significantly affect the lives of citizens. The full list of high-risk systems is two pages long and can be read in Annex III.

After the law is passed, the Commission can add new uses of AI that must be approved, as long as they fall under one of the existing high-risk areas.

For a high-risk system to be approved, the provider must submit detailed technical documentation for the system.¹ Requirements for the technical documentation include:

  • design specifications, key design choices, description of what the system is optimizing.

  • description of any use of third party tools.

  • description of training data, how it has been obtained, how it has been processed.

  • how the system can be monitored and controlled.

    • The system must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator (!!)

  • description of foreseeable risks the system poses to EU citizens’ health, safety and fundamental rights.

In other words, the act creates a large, (partially) updatable list of areas where nobody is allowed to deploy AI without explicit approval, which is only granted if the system lives up to numerous safety requirements, one of which is a working off-switch. High-risk systems not only need approval but must also be continuously monitored after they are placed on the market.

Establishment of the European AI Board

To enforce this, the act requires new institutions to be created.

  • The European AI Board.

    • Run by the EU Commission (EU’s civil service).

    • Oversees national authorities and settles disputes.

  • National supervisory authorities in every EU country.

    • Countries are free to structure AI authority as they see fit.

  • Can create regulatory sandboxes, which allow companies to set aside parts of the act’s requirements in controlled settings.

The national supervisory authorities are responsible for approving high-risk systems and doing post-market monitoring to ensure approved systems are working as intended and pose no threat to EU citizens.

If two national supervisory authorities get into a dispute over whether a system should be approved or not, the European AI Board steps in and makes a final decision. The European AI Board is also responsible for overseeing and coordinating the national supervisory authorities.

Blanket ban of certain AI uses

Social scoring systems by governments are entirely banned.

The AI Act also bans use of AI that “deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm”.

If you find it unclear exactly which AI systems would fall under this definition, you are picking up on an important EU practice. By leaving the law deliberately vague, the European Court of Justice and standardisation bodies are left to determine which systems fall under the definition when specific cases of accused misuse are brought to court. The thinking is that specific definitions fall prey to loopholes, whereas the court is better able to punish only those who go against the ‘spirit’ of the law.

A less flattering analysis is that the ban is window dressing. The Commission has struggled to come up with any example of a banned use case that would not already be considered illegal under other EU regulation.²

AI systems must make themselves known

Any AI system that interacts with humans must make it clear that it is an AI.
A customer-service chatbot pretending to be a human agent will, for example, be illegal.

Users of an AI system which generates deepfakes or similar content must disclose to their audience that the content is fake.

Users of emotion recognition or biometric categorization systems must disclose this to the subjects that they are using the system on.

Why should you care about the AI Act?

The AI Act is a smoke-test for AI governance

The AI Act creates institutions responsible for monitoring high-risk systems, and possibly for broader monitoring of AI development in Europe. Doing so effectively is a monumentally difficult task that takes trial and error to get right.

If/when the monitoring of transformative AI systems becomes necessary, the AI Act ensures that the European Union will have institutions with plenty of practice. Other countries looking to implement similar oversight measures will also be able to learn from the AI Board’s successes and failures.

The Brussels Effect

The EU AI Act is the single biggest piece of AI legislation introduced to the world yet. If history is anything to go by, there are good reasons to believe the act will influence the development of AI and AI legislation the world over.

When GDPR was introduced, it was cheaper for Microsoft to implement GDPR compliance worldwide than to create a separate European version of every service it offers. Similarly, we can expect it to be cheaper for AI developers serving the European market to make all their systems compliant with European regulation. This phenomenon has been dubbed the Brussels Effect.

The extent to which the Brussels Effect will shape the development of transformative AI depends on how continuous AI takeoff turns out to be.

If transformative AI is brought about by a continuous stream of incremental improvements, we can expect development to be constrained by near-term profits: companies that forego the European market face a competitive disadvantage. In such a world, European laws and regulation are likely to play a significant international role.

In a world where transformative AI is brought about by discontinuous jumps in capability, we are much more likely to see races between private companies and governments alike, all gunning to be first. In this world the European Union will struggle to be internationally influential.

I have written a rough analysis of why this is, which can be read here.

The AI Act lays the foundation for future AI regulation

The AI Act sets up institutions that will play an important role in all future regulation. Lawmakers around the world will draw lessons from its successes and failures. An AI Act that is a smashing success moves the Overton window and enables future regulation. The act also sets important legal precedents, for example that high-risk AI should be continuously monitored to prevent harm.

Once passed the legislation is unlikely to see major changes or updates

Flagship EU regulations such as GDPR and REACH (chemicals regulation) tend not to see major updates even decades after being passed.

Going by historical precedent, Europe will be stuck with whatever act is passed for a while. That precedent may not be particularly applicable, though: if AI starts rapidly and visibly transforming society, I doubt the Commission will be shy about suggesting large updates to the regulation.

Responses from the EA community

The AI Act has received mixed responses within the EA community. I’ve summarized what I view as the main positive points emphasized by the EA community and the main areas that need improvement.

Positive points often emphasized

  • The act justifies AI regulation through the need to protect citizens’ health, safety and fundamental rights. This sets a fantastic precedent for future regulation.

  • The need for continuous monitoring of high-risk AI and the creation of institutions capable of doing so.

  • That high-risk systems ‘must have in-built operational constraints that cannot be overridden by the system itself and is responsive to the human operator’ (i.e. a working off-switch).

Commonly suggested improvements

  • Every definition and wording assumes that AI is specialized and narrow. Only minor changes would be needed to enable the Commission to regulate general AI systems with many intended uses.

  • The European AI Board will currently do little to explicitly monitor the progress of AI as a whole. It could be made responsible for maintaining a database of AI accidents and near misses.

  • The act only affects AI that is placed on the market. The European AI Board could be made responsible for monitoring non-market AI for industrial accidents, similar to what is done in chemicals regulation.

  • Operators and developers of high-risk systems must explicitly consider possible violations of an individual’s health, safety or fundamental rights. The conformity assessment for high-risk systems could require operators and developers to also consider societal-scale consequences.

You can read the public responses from various EA and EA-adjacent organisations here.

What is next

Insofar as the AI Act matters, now is the time to act. The EA community is generally hesitant to engage directly with policy, for good reason: we barely know what good AI policy looks like, and we would prefer to wait to act until we know how to act and what the consequences of doing so would be.

But the rest of the world is not static and will adopt policy even if we would prefer to wait. The choice is not between engaging with the act now or later; it is between engaging with the act now or never.

The AI Act is not yet final, but the European Union is likely to pass some version of it in the coming years. The name of the game for EA organisations engaged with the act is generally to push for improvements similar to those suggested above, but there is much more work that can be done.

If the act is passed, EAs wanting to work with AI in the European Union should keep an eye out for new opportunities, such as working in the EU AI Board or the national supervisory authorities. This may be particularly impactful in the early years of these institutions, when their culture and practices are still malleable.

¹ The full list of required technical documentation can be found in Annex IV.

² Demystifying the AI Act argues this in greater detail.