The Rise of the European AI Office: From Concept to Enforcement
Two years ago the European AI Office was little more than a footnote in the AI Act's institutional architecture. Today it employs roughly 100 staff, holds real enforcement powers over frontier AI models, and is drafting the standards that will define compliance across the continent. The question is whether it can actually deliver.
The European AI Office has transformed from a single paragraph in a legislative text into the most consequential AI regulator the world has yet produced, and it has done so at a speed that has surprised even its own architects.
Established formally in February 2024 within the European Commission's DG CONNECT, the AI Office was granted a legal basis by the EU AI Act once the regulation entered into force in August 2024. What followed was a hiring sprint that would be unusual in any Brussels institution: by early 2025, the Office had grown to approximately 100 staff, drawing from Member State national authorities, seconded national experts, and direct Commission recruits. The ambition, from day one, was enforcement rather than advisory guidance.
To understand why this matters, it helps to recall what existed before. The AI Act was negotiated over three years, but the institutional question of who would actually supervise it was largely deferred until near the end of trilogue. The answer that emerged was architecturally novel: a dual-layer system in which Member States would regulate most AI systems through national market surveillance authorities, while a central EU-level office would hold exclusive jurisdiction over so-called general-purpose AI models, the frontier systems at the heart of present-day commercial AI development.
"The AI Office holds exclusive jurisdiction over general-purpose AI models with systemic risk, a carve-out that places the most powerful AI systems on the planet directly under Brussels supervision."
AI in Europe analysis of EU AI Act, Title III, Chapter 5
That jurisdictional carve-out is the AI Office's defining power. Any provider of a general-purpose AI model with what the Act defines as systemic risk, broadly calibrated at 10²⁵ floating-point operations (FLOPs) of training compute, falls directly under Brussels supervision. In practice, that means Meta's Llama series, Google's Gemini, OpenAI's GPT-4 and its successors, Anthropic's Claude, and Mistral's frontier releases are all subject to AI Office oversight. The Office can demand technical documentation, conduct evaluations, issue binding decisions, and recommend fines to the Commission. For a body that did not legally exist before 2024, that is an extraordinary remit.
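To make the threshold concrete, a common back-of-the-envelope rule estimates dense-transformer training compute at roughly 6 FLOPs per parameter per training token. The sketch below applies that heuristic; the model sizes and token counts are illustrative assumptions, not figures from the Act or any provider, and the 6ND rule is not the Act's official measurement methodology.

```python
# Back-of-the-envelope check against the AI Act's 10^25 FLOP systemic-risk
# presumption threshold, using the common ~6 FLOPs per parameter per
# training token approximation for dense transformers. Illustrative only:
# the model sizes and token counts below are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate dense-transformer training compute: ~6 * N * D FLOPs."""
    return 6 * params * tokens

runs = [
    ("70B params, 15T tokens", 70e9, 15e12),
    ("400B params, 15T tokens", 400e9, 15e12),
]
for name, params, tokens in runs:
    flops = estimated_training_flops(params, tokens)
    flag = "above" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs ({flag} the 1e25 threshold)")
```

On this heuristic, a 70-billion-parameter model trained on 15 trillion tokens lands at roughly 6.3 × 10²⁴ FLOPs, just under the line, while a 400-billion-parameter model on the same data crosses it comfortably, which is why the threshold captures today's frontier releases.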
Leadership and Organisational Design
The Office is led by Lucilla Sioli, a senior Commission official who previously headed the Artificial Intelligence and Digital Industry directorate within DG CONNECT. Sioli has been consistent in her public messaging: the Office intends to be a technical regulator, not a political one, and it will base its enforcement decisions on scientific evidence rather than on lobbying pressure from either the industry or civil society sides of the debate.
Below Sioli, the Office is structured into four operational units. The first handles the codes of practice for general-purpose AI, a critical workstream because the Act requires these codes to be developed in dialogue with industry before binding obligations fully apply. The second unit manages AI safety evaluation, including the scientific advisory panel that the Act mandates to provide independent technical input. The third covers AI innovation and coordination with national authorities. The fourth is responsible for regulatory affairs and international engagement.
Cecilia Bonefeld-Dahl, Director-General of DigitalEurope, the Brussels-based technology industry association, has publicly described the codes-of-practice process as a genuine attempt at co-regulation, while also flagging that the timelines are extremely tight and that smaller European AI companies face a disproportionate documentation burden. DigitalEurope has been an active participant in the multi-stakeholder working groups that the Office convened in late 2024, and Bonefeld-Dahl's commentary has consistently called for proportionality to be built into the final obligations, particularly around transparency requirements for model weights and training data.
The Codes of Practice: A Race Against the Clock
The general-purpose AI code of practice is arguably the most consequential document the AI Office will produce in its first operational year. Under the Act's timetable, the code needed to be ready by May 2025, nine months after the regulation entered into force and ahead of the general-purpose AI obligations applying from August 2025. Drafting began in October 2024, with four working groups covering transparency and copyright, risk identification and assessment for systemic-risk models, technical risk mitigation, and internal risk management and governance.
The process has not been without friction. Several civil society organisations, including AlgorithmWatch, a Berlin-based digital rights group that has tracked EU AI policy closely, have raised concerns that the working groups are weighted too heavily towards industry participation and that the code risks becoming a vehicle for self-regulation dressed up as binding obligation. The AI Office has pushed back on this characterisation, pointing to the breadth of the stakeholder pool and to the scientific panel's independent oversight role.
What is certain is that the substantive debates inside those working groups will determine how much actual accountability frontier AI providers face in Europe. Questions such as whether model providers must disclose training data sources to regulators, how red-teaming results must be shared, and what incident-reporting thresholds apply are all live. The draft text that emerged from the first two working group iterations in early 2025 was generally regarded as more substantive than many observers had expected, though critics noted that key enforcement triggers remained vague.
The scale of the AI Office's mandate becomes clearer when set against some concrete figures. The Office supervises frontier models trained at compute levels that have grown by several orders of magnitude in less than five years. Its own staffing has increased from zero to approximately 100 in twelve months. The fines it can recommend for systemic-risk model violations reach up to 3 per cent of global annual turnover, or 15 million euros, whichever is higher. Across the continent, more than two dozen national market surveillance authorities are being stood up to handle the rest of the AI Act's scope, each requiring their own technical capacity. The aggregate regulatory architecture is unprecedented in digital technology governance anywhere in the world.
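The "whichever is higher" penalty logic is worth making explicit, since for smaller providers the fixed floor dominates. A minimal sketch, using made-up turnover figures:

```python
# Illustrative penalty ceiling for GPAI-related violations under the AI Act:
# the higher of 3% of global annual turnover or EUR 15 million.
# The turnover figures passed in below are invented for illustration.

def gpai_fine_ceiling(global_annual_turnover_eur: float) -> float:
    """Return the maximum fine: max(3% of turnover, EUR 15 million)."""
    return max(0.03 * global_annual_turnover_eur, 15_000_000)

print(gpai_fine_ceiling(100e6))  # EUR 15,000,000 (the fixed floor dominates)
print(gpai_fine_ceiling(10e9))   # EUR 300,000,000 (the 3% rule dominates)
```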
Capacity Gaps and the Auditors' Warning
The European Court of Auditors has not yet produced a full audit of the AI Office specifically, but its earlier work on the Commission's digital regulation capacity, including a 2022 briefing on the adequacy of resources for implementing the Digital Markets Act and Digital Services Act, flagged a recurring structural problem: Brussels tends to legislate at a speed that outpaces its own institutional capacity to implement. That pattern is visibly at risk of repeating itself here.
The AI Office's scientific panel, which is meant to provide independent technical evaluation of high-capability models, was still in its formation phase in early 2025. Recruiting AI safety researchers willing to work in a public-sector environment, bound by confidentiality obligations and Commission salary scales, is not straightforward when the same researchers can command multiples of that compensation in industry. The Office has been transparent about this tension, but transparency does not resolve the underlying labour market problem.
There is also the question of compute access for independent evaluation. To meaningfully assess a frontier model, a regulator needs the ability to run adversarial prompts, evaluate capability elicitation, and probe alignment properties at scale. That requires significant inference compute. The AI Office does not currently have its own dedicated compute infrastructure. It has relied on cooperation with model providers and on partnerships with public research institutions such as ELLIS, the European Laboratory for Learning and Intelligent Systems, which has research units across ten European countries. ELLIS has been a constructive interlocutor, but there is a material difference between academic collaboration and in-house regulatory capacity.
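To give a sense of the scale involved, a standard approximation puts dense-transformer inference cost at roughly 2 FLOPs per parameter per token. The sketch below estimates the compute bill for a single evaluation campaign; every figure in it is a hypothetical assumption, not an AI Office or provider number.

```python
# Rough inference-compute estimate for one independent evaluation campaign,
# using the standard ~2 FLOPs per parameter per token approximation for
# dense-transformer inference. All figures are hypothetical assumptions.

params = 400e9              # assumed frontier-scale dense model
prompts = 100_000           # assumed adversarial / capability-elicitation prompts
tokens_per_prompt = 2_000   # assumed input + output tokens per prompt

total_tokens = prompts * tokens_per_prompt  # 2e8 tokens
total_flops = 2 * params * total_tokens     # ~1.6e20 FLOPs
print(f"One evaluation pass: ~{total_flops:.1e} FLOPs")
```

Even under these modest assumptions, a single pass costs on the order of 10²⁰ FLOPs, and serious red-teaming requires many passes, which is why ad hoc academic partnerships are a poor substitute for in-house regulatory compute.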
International Positioning and the Standards Race
The AI Office is not operating in isolation. The United Kingdom's AI Safety Institute, now rebranded as the AI Security Institute under the Labour government, has moved in a broadly comparable direction, and the two bodies have agreed a memorandum of understanding on information sharing and joint evaluation. That cooperation is practically significant given that many of the frontier model providers operate across both jurisdictions and that duplicating compliance demands would impose real costs.
At the international standards level, the AI Office is actively engaged with the work of ISO and IEC on AI standards, particularly the ISO/IEC 42001 management system standard and the emerging technical standards that the AI Act explicitly references as a route to presumed conformity. CEN and CENELEC, the European standardisation bodies, are producing harmonised standards under the AI Act mandate. The Office's ability to shape those standards, rather than simply receive them, will be a test of its technical authority in the coming years.
Mistral AI, the Paris-based frontier model developer and the only European company currently in scope for GPAI systemic-risk obligations, occupies an interesting position in this landscape. It is simultaneously subject to AI Office supervision and a vocal advocate for proportionate regulation that does not disadvantage European developers relative to their American and Chinese competitors. Mistral's leadership has engaged substantively in the codes-of-practice process, and the company's position, broadly, is that strong safety standards are acceptable provided they are genuinely science-based rather than precautionary to the point of being commercially disabling.
What Enforcement Actually Looks Like
The Act gives the AI Office the power to request information from model providers, to conduct evaluations independently or in partnership with the scientific panel, and, where it finds a violation, to issue a decision and recommend a fine to the Commission. The Commission retains the formal decision-making authority on penalties, which is an important accountability check but also a potential source of delay.
No formal enforcement action had been concluded by the time of publication. The Act's general-purpose AI provisions only became applicable from August 2025, meaning the Office is in the period between standing up its regulatory machinery and deploying it. How it handles its first contested case will be enormously instructive. Will it move quickly, accepting the risk of legal challenge from a well-resourced provider, or will it proceed with the caution typical of a new institution uncertain of its own legal robustness? The answer will define whether the AI Office is genuinely an enforcement body or a sophisticated advisory function with powers it is reluctant to use.
THE AI IN EUROPE VIEW
The European AI Office deserves credit for what it has actually done: built a credible regulatory institution in roughly twelve months, convened a meaningful multi-stakeholder process, and resisted capture by lobbying interests on any side. That is harder than it sounds, and Lucilla Sioli's team has managed it under considerable political and commercial pressure.

But institutional credibility and enforcement credibility are not the same thing, and Europe has a long history of building the former while deferring the latter indefinitely. The compute gap for independent model evaluation is not a minor operational detail; it is a foundational problem. A regulator that cannot independently verify what it is regulating is not, in any meaningful sense, regulating. The AI Office needs dedicated public compute, a fully operational scientific panel, and the political backing to bring its first contested enforcement case without flinching.

Cecilia Bonefeld-Dahl and DigitalEurope are right that proportionality matters, and Mistral is right that European developers should not be crushed by compliance costs designed with American hyperscalers in mind. But neither of those legitimate concerns should become a reason to hollow out the obligations before the ink is dry. The AI Office has the architecture. Now it needs the spine.
AI Terms in This Article (6 terms)
inference
When an AI model processes input and produces output. The actual 'thinking' step.
at scale
Applied broadly, to a large number of users or use cases.
AI safety
Research focused on ensuring AI systems behave as intended without causing harm.
alignment
Ensuring AI systems pursue goals that match human intentions and values.
red-teaming
Deliberately trying to make an AI system fail or produce harmful outputs to find weaknesses.
compute
The processing power needed to train and run AI models.