Current paths to impact in EU AI Policy (Feb ’24)

This document outlines how EU AI policy could be shaped in the short, mid, and long term, given that the EU AI Act opens new doors for impact. Since I expect a lot of path dependency, joining organisations like the EU AI Office as soon as they are set up in the coming months is very likely a great option.


About the author: I am someone well-versed in EU AI policy, having spent considerable time in the field and building a strong network. I am posting anonymously because discussing policy paths openly could affect my reputation.

Working on enforcement of the EU AI Act (and monitoring of risks)

Work for the EU AI Office

This is overall the largest opportunity for impact, and I would highly encourage people to make plans to join it (and to encourage your talented friends as well). We know the EU AI Office wants to hire 100 people in total, ~80 of whom will come from outside DG CONNECT.

  • From this article: Following the new rules on GPAI models and the obvious need for their enforcement at EU level, an AI Office within the Commission is set up, tasked to oversee these most advanced AI models, contribute to fostering standards and testing practices, and enforce the common rules in all member states. A scientific panel of independent experts will advise the AI Office about GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high-impact foundation models, and monitoring possible material safety risks related to foundation models. The AI Board, which would comprise member states’ representatives, will remain as a coordination platform and an advisory body to the Commission, and will give an important role to member states in the implementation of the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board. See also this recent article from Euractiv for more info.

  • This letter proposes topics people could focus on when working for the EU AI Office. My main Theory of Change would be: make sure that supervisory power is divided in proportion to the severity of risks and the scale of the companies. This means that larger risks (in terms of likelihood and severity) should get the most resources, as should the larger companies/models (e.g. OpenAI, DeepMind, Anthropic, and Meta).

  • Timeline

    • The rules of the AI Act will apply ~12 months after the AI Act is officially adopted (this is expected to happen in the coming months). So if it is agreed upon in April 2024, the rules will apply a year later, from April 2025 onwards. Nine months before the rules apply, the AI Office must start working on Codes of Practice and a risk taxonomy. This means the hiring processes for the EU AI Office could probably start in April, possibly somewhat earlier or later.

  • Expertise

    • Since they need to hire incredibly fast at below-market rates compared to tech companies (~$3,000 post-tax), it might be hard for the EC to hire the right talent. Sufficient experience here could be experience at a tech/GPAI company, having written op-eds or research articles on GPAI or EU-US harmonisation, or relevant work experience in think tanks, the EC, etc. Of course, you need to be an EU citizen, be proficient in English and a second EU language, and have completed a Master’s degree. For the more senior roles, more requirements obviously apply.

  • Application Process

    • Interested individuals are already writing letters of interest for roles to key stakeholders in the European Commission, e.g. Heads of Unit. Most hires will probably be sourced from people in the CAST database. You can already apply to other EC jobs and take the CAST test, which takes about two months; doing this would prepare you well.

Work for national member state authorities

National authorities will play an enormous role in enforcing the EU AI Act, e.g. the Autoriteit Persoonsgegevens in the Netherlands. Other countries very likely have similar structures in place.

Work in think tanks

Think tanks that advise on the best enforcement structure include CEPS, The Future Society, EPC, FLI, and other Brussels-based organisations.

Working on potential future regulation / preventing further watering-down of AI regulation

Important in the short, mid, and long term

  • Try to position yourself for the future cabinets of the relevant Commissioners on AI (for after the June elections) or in DG CONNECT, especially unit A2, which deals with AI Policy, Development and Coordination. They might initiate new AI-related laws and directives over the coming EC period.

  • We saw at the end of the AI Act process that member states can wield considerable power; there was pushback against GPAI regulation from member states like France late in the negotiations. Climbing the ladder in important national administrations is a great long-term career path for keeping the pressure on from the inside.

  • You can also become an Accredited Parliamentary Assistant (APA) or intern for an MEP who works on AI.

Working on compute governance

  • Make sure that the lever of compute governance gains acceptance in Europe, is implemented in ways aligned with the latest scientific insights, and is properly enforced.

    • Climb the ladder within the policy team of ASML (see some of the recent coverage on ASML), or alternatively within one of the other European companies in the supply chain, e.g. Carl Zeiss.

    • Work for the part of the Dutch government that deals with export restrictions on ASML machines. Since ASML is a Dutch company and its Extreme Ultraviolet Lithography (EUV) machines are one of the most important pieces in the compute supply chain, this option could be very relevant.

    • Again, the EU AI Office: one of its first tasks will be to systematize the reporting of FLOPs (because they are used to assess the riskiness of models).
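
To make the FLOP-reporting point concrete, here is a minimal sketch of how a compute-based risk classification might be computed. It assumes the AI Act's 10^25 FLOP threshold for presuming systemic risk in GPAI models and the common rough estimate of ~6 FLOPs per parameter per training token (the specific model numbers below are hypothetical, for illustration only):

```python
# Threshold above which the AI Act presumes a GPAI model poses systemic risk
# (cumulative training compute, measured in floating point operations).
AI_ACT_SYSTEMIC_RISK_THRESHOLD = 1e25


def estimate_training_flops(n_params: float, n_training_tokens: float) -> float:
    """Rough estimate of training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_training_tokens


def presumed_systemic_risk(flops: float) -> bool:
    """True if the reported training compute meets the AI Act threshold."""
    return flops >= AI_ACT_SYSTEMIC_RISK_THRESHOLD


# Hypothetical example: a 70B-parameter model trained on 2T tokens
flops = estimate_training_flops(70e9, 2e12)  # ≈ 8.4e23 FLOPs
print(f"{flops:.2e} FLOPs, systemic risk presumed: {presumed_systemic_risk(flops)}")
```

The real designation process will of course involve more than a single threshold check (the scientific panel can also flag models on other grounds), but this is the basic arithmetic that standardized FLOP reporting would feed into.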

Some other options

  • Work on standard-setting under the EU AI Act through CEN-CENELEC. This is a very near-term option, since the relevant discussions will take place over the coming 12 months or so; it is only useful if you can enter the discussion right now.

  • Feed into the current open-source debate with sensible takes. This ongoing debate could use nuanced views on how to keep the positives of open source (e.g. checks from third parties) while not falling prey to its risks.

    • The debate will be fed mostly by think tanks and lobbyists

    • Working from inside EU institutions, in various positions, might make sense as well

  • Work in tech diplomacy: there are lots of possibilities to influence the upcoming AI summits (organised by France, South Korea, etc.). The Theory of Change could be to improve the odds of a de jure Brussels effect, or to strengthen collaboration on GPAI regulation, open source, technical standards, and international monitoring, which seems robustly good. The AI Office will also work on “exporting” some of the Brussels regulations.

Some general tips based on giving career advice to dozens of people