Opportunities for Impact Beyond the EU AI Act
Summary
The EU AI Act trilogues are expected to conclude in the coming weeks, and the final version of the act will likely include regulations covering foundation models.
The window for effecting change here has not yet closed and is likely to extend several years into the future. Therefore, there is still an impact case for those interested in reducing risks from advanced AI to enter European policy.
Some upcoming opportunities for impact include:
Establishing mechanisms for enforcing the most important aspects of the AI Act.
Shaping the precise technical standards & guidelines that will be enforced under the AI Act.
Encouraging international bodies & other nations to adopt the EU’s AI standards (conditional on sensible standards being set by the EU in the coming years).
Applications for the EU Tech Policy Fellowship close on October 15. We encourage EU citizens interested in AI governance to apply.
The EU AI Act: Where are we now?
As the EU AI Act trilogues come to a close, AI governance in Europe is on the brink of a significant milestone.
Although the precise scope of the act remains somewhat unclear, there are hopeful signs that it will regulate foundation models. Early drafts of the EU AI Act listed a series of high-risk areas in which the deployment of AI was restricted (e.g. law enforcement; migration, asylum & border management). More recent drafts include an article that extends this legislation to cover foundation models and would require providers to demonstrate the mitigation of reasonably foreseeable risks & to document remaining non-mitigable risks.
If such requirements survive the final trilogue negotiations, major AI labs will have to decide whether to withdraw their models from the EU, create separate models for the EU market, or develop a single model that complies with the EU AI Act. If labs opt for the third option, this would constitute a “de facto” Brussels Effect, with EU standards becoming a global norm.
Opportunities for impact beyond the AI Act
However, passing the AI Act is just the beginning. The act will not come into effect until Fall 2025, and the window for effecting change is likely to extend several years beyond that. For EU citizens interested in reducing risks from advanced AI, we believe there is still a strong impact case for entering European policy.
Enforcement
Enforcement is an essential component of the EU AI Act. Without adequate enforcement, the act will likely have little impact on reducing risks from advanced AI.
The challenges of enforcing the AI Act are akin to those faced with GDPR. Just as GDPR is enforced by individual data protection authorities within each member state, enforcement of the AI Act is likely to involve shared responsibility between national bodies at the member state level and European authorities. Organisations tasked with this responsibility will likely be talent- and resource-constrained, which creates an opportunity for knowledgeable individuals with a strong prioritisation mindset to have a large impact.
How to enter: Apply directly to relevant open vacancies at the EU level (e.g. this role at ECAT). Alternatively, join your local data protection authority responsible for enforcing GDPR (full list here) with the intention of transitioning to an AI department if/when one is established, or of leveraging this experience to join the organisation that will ultimately enforce the AI Act in your country.
Technical standards & guidelines
Technical standards and guidelines will continue to be developed by CEN and CENELEC between 2023 and 2025. The forthcoming EU AI Office could also play an important role here, as could national standardisation committees. These standards will play a pivotal role in shaping the practical aspects of the act, e.g. outlining the exact risk management requirements for voluntary audits. Those with technical backgrounds and a strong understanding of the risks of advanced AI models could make significant contributions here.
How to enter: Those interested could join one of the National Standards Bodies (full list), which are responsible for standards in each member state; one of the European Standards Organisations (i.e. CEN, CENELEC, and ETSI), which are responsible for all EU standard-setting; the European Commission; or a think tank working on developing technical standards.
Exporting European regulation
The EU’s standards and regulations have the potential to influence the global AI landscape. Provided that sensible technical standards & enforcement mechanisms are enacted, it could make sense for individuals to encourage other international bodies to adopt similar protocols. For example, the EU-US Trade and Technology Council (TTC), the OECD, the UN, and the G7 are all currently considering frameworks for classifying AI systems and technical standards for regulating them.
How to enter: Those with strong diplomacy skills (or a professional background in diplomacy) and an understanding of the relevant aspects of EU technical standards and enforcement protocols could join other international bodies likely to introduce AI policy.
Advocating for increased AI safety funding
Lobbying for AI safety funding in the next Multiannual Financial Framework (MFF, 2028-2034) could also be a high-impact opportunity. The 2021-2027 MFF allocated ~€7.6 billion to the Digital Europe Programme, which aims to accelerate economic recovery and drive the digital transformation of Europe; its investment areas include core AI & supercomputing capacities. Horizon Europe received a total budget of €95.5 billion, part of which is dedicated to research on “digital, industry and space”, covering AI, high performance computing, and other key digital technologies. Given the scale of the funding available, redirecting even a small portion of it to safety-focused efforts could be enormously impactful: 1% of Horizon Europe’s budget alone would amount to roughly €1 billion.
How to enter: Joining the European Commission, most likely within DG RTD (the Directorate-General for Research and Innovation), could be a helpful step. The European Parliament will play a role later in the process, so joining as an assistant to a relevant MEP could also be helpful. It’s also possible to advocate through other channels, for example at a think tank focused on tech policy or through a national government.
Other potential paths
Updating the AI Act: Technical standards & guidelines could be updated over time as models advance. It’s unclear where this responsibility will fall, but working at the forthcoming EU AI Office could enable one to shape future amendments to the act in a positive direction.
EU Careers focused on a narrow threat model: It could make sense for a portion of talent to specialise in a specific domain relevant to a concrete risk model. This could be especially true if you have a background particularly suited to one area (e.g. an individual with a biotechnology background could be best suited to working on the potential misuse of AI to engineer pandemics). Other options include specialising in cybersecurity to improve Europe’s resilience to infrastructure attacks (e.g. at ENISA) or in tackling disinformation to protect against AI-facilitated disinformation campaigns.
Interested? Apply for the EU Tech Policy Fellowship (deadline 15 October)
If you’re interested, we encourage you to apply for the EU Tech Policy Fellowship or check out this list of opportunities & resources we’ve compiled.
The EU Tech Policy Fellowship is a 7-month programme (January to July 2024) for ambitious graduates looking to launch European policy careers focused on emerging technology. Applications for the winter cohort are due October 15.
As a fellow in our programme, you’ll choose between two distinct tracks: Training and Placement. Training track fellows explore the intricacies of tech policy through an 8-week online course and a week-long policymaking summit in Brussels. In addition to this training, Placement track fellows also secure a fully-funded 4-6 month placement at a respected think tank such as the Centre for European Policy Studies or The Future Society.
Upon completion of the fellowship, we expect you’ll transition into a career working directly on AI policy in Europe (such as those outlined in this post). Beyond the fellowship, we’ll continue to provide support, guidance, and networking opportunities to alumni as we build a strong community of policy professionals working to reduce risks from emerging technology in Europe.
Apply here by October 15
Regarding influencing Horizon funding, my understanding is that universities are consulted quite extensively on this. For example, if I recall correctly, the head of the EU funds team at my old university was in meetings where decisions were made. If you’re an academic, it might be worth reaching out to the equivalent person at your institution and lobbying them.
Nice post! Yet another path to impact could be influencing international regulation processes, such as the AI Safety Summit, by shaping the EU’s and member states’ positions. In a positive scenario, the EU could even take on a mediation role between the US and China.