
The EU AI Act at Year Two: A Scorecard That Demands Harder Choices

The EU AI Act has been partly enforceable for a year. Some parts have worked better than expected; others have created genuine regulatory gridlock. This scorecard identifies what the AI Office got right, where the general-purpose AI rules are failing in practice, and why the scheduled 2027 review should not wait.

The EU AI Act is not broken, but it is already showing stress fractures that regulators and legislators cannot responsibly ignore for another two years. Twelve months into partial enforceability, the picture is mixed in ways that matter enormously to the companies, civil-society groups, and public institutions that have to live inside this framework every day. The Act deserves credit where credit is due; it also deserves direct, unsparing scrutiny on the parts that are not working. This is that scrutiny.

What the Act Got Right

Start with the definitional architecture, because it is genuinely better than most critics predicted. The risk-tiering approach, sorting AI systems into unacceptable, high, limited, and minimal-risk categories, gave practitioners something they had been missing from earlier EU digital regulation: a workable entry point. Law firm Bird & Bird, which has been running one of the more rigorous compliance trackers across multiple EU member states, noted in its 2025 tracker update that the majority of in-scope companies were able to place their systems in a risk tier within six months of the relevant provisions taking effect. That is not a trivial achievement. The definitions of prohibited practices, particularly real-time remote biometric identification in public spaces, turned out to be precise enough to be actionable without being so narrow that they were trivially circumvented.
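For readers outside the compliance function, the triage logic is simple enough to state directly. The sketch below is a deliberate simplification of the tiering, with illustrative example systems; the real legal test runs through the Article 5 prohibitions, the Annex III high-risk categories, and a web of exemptions that no four-line enum can capture.

```python
# Illustrative simplification of the Act's four risk tiers. The real legal test
# turns on Article 5 prohibitions, Annex III categories, and detailed
# exemptions; this sketch only shows why practitioners found the tiering a
# workable entry point.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (Article 5)"
    HIGH = "high-risk (Annex III; third-party conformity assessment)"
    LIMITED = "limited risk (transparency obligations)"
    MINIMAL = "minimal risk (no new obligations)"

# Hypothetical triage table an in-house team might start from.
EXAMPLES = {
    "real-time remote biometric ID in public spaces": RiskTier.UNACCEPTABLE,
    "CV-screening tool used in hiring decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for system, tier in EXAMPLES.items():
    print(f"{system}: {tier.value}")
```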

The EU AI Office, anchored in Article 64 of the Act, has been the other genuine success story. Operating out of Brussels under the European Commission's DG CONNECT, it has moved faster than most new regulatory bodies. It published its first set of model reporting templates ahead of the statutory deadline, coordinated with national competent authorities across seventeen member states within its first eight months, and produced technical guidance on high-risk classification that was broadly welcomed by industry and civil-society groups alike. AlgorithmWatch, which approaches EU AI governance from a rights-based perspective and is rarely generous with official bodies, acknowledged in its 2025 commentary that the AI Office had demonstrated more operational discipline than equivalent bodies established under the GDPR had shown in their early years.

"The AI Office demonstrated more operational discipline than equivalent bodies established under the GDPR had shown in their early years."
AlgorithmWatch, 2025 EU AI Act commentary

The IMCO and LIBE committees of the European Parliament have also performed better than their reputations for legislative sprawl might suggest. Joint hearings held across late 2024 and early 2025 produced substantive technical records, including expert testimony from academic researchers, national data protection authorities, and deployers of high-risk systems in healthcare and critical infrastructure. Those records are now forming the evidentiary basis for the Act's first formal review cycle. That is how the process is supposed to work, and on this occasion it broadly has.

[Image: a notified body testing laboratory in continental Europe, with technical staff examining server hardware and running diagnostic software]

Where the Framework Is Failing

The general-purpose AI provisions, introduced late in the trilogue process and set out in Chapter V of the Act, are a different story. The GPAI rules were always a political compromise stapled onto a risk-based framework that had not been designed to accommodate them, and the seams are showing badly. The core problem is a definitional one: the distinction between a GPAI model and a GPAI model with systemic risk hinges on a training compute threshold of 10^25 floating-point operations (FLOPs), a figure that was already being questioned as technically arbitrary at the time of enactment. Within months of the relevant provisions entering force, frontier model providers were engineering architectures that complicated straightforward compute accounting, and the AI Office lacked the technical staff to audit those claims rigorously.
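To see why compute accounting is both load-bearing and gameable, consider the standard back-of-envelope estimate for dense transformer training: roughly six FLOPs per parameter per training token. The sketch below is purely illustrative; the model figures are hypothetical and the estimator is a community rule of thumb, not the AI Office's methodology.

```python
# Illustrative sketch: the common 6 * N * D approximation for dense-transformer
# training compute, checked against the Act's 10^25 FLOP systemic-risk trigger
# (Article 51). The models below are hypothetical. Real accounting must also
# cover fine-tuning runs, mixture-of-experts routing, and synthetic-data
# generation, which is exactly where providers and regulators now disagree.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Back-of-envelope estimate: ~6 FLOPs per parameter per training token."""
    return 6.0 * params * tokens

hypothetical_models = {
    "model_a": (70e9, 15e12),   # 70B parameters, 15T tokens
    "model_b": (400e9, 12e12),  # 400B parameters, 12T tokens
}

for name, (params, tokens) in hypothetical_models.items():
    flops = training_flops(params, tokens)
    flag = "systemic risk" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below threshold"
    print(f"{name}: ~{flops:.2e} FLOPs -> {flag}")
```

Note how a 70B-parameter run lands comfortably under the line while a 400B run clears it by a factor of three: the classification is acutely sensitive to inputs the provider itself reports, which is what makes the single-prong threshold so easy to engineer around.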

Hogan Lovells, whose Brussels-based regulatory practice produced a detailed GPAI compliance tracker in early 2025, flagged a further structural problem: the codes of practice developed for GPAI providers were being treated by some large developers as a substitute for binding obligation rather than a supplement to it. The voluntary nature of code participation, combined with the absence of a clear sanction pathway for non-participation, created an incentive structure that rewarded strategic ambiguity. Companies could signal engagement with the process whilst deferring the harder operational commitments indefinitely. The AI Office has acknowledged this gap informally, but the Act as written gives it limited leverage to close it without legislative amendment.

The conformity assessment bottleneck is a separate, structural failure that risks undermining the entire high-risk category. Under the Act, high-risk AI systems in certain domains, notably biometric systems, critical infrastructure management tools, and AI used in employment decisions, require third-party conformity assessment before they can be placed on the market. The problem is that the pool of notified bodies with the technical competence to conduct those assessments is dramatically smaller than the volume of systems requiring them. As of mid-2025, fewer than forty notified bodies across the EU had been formally designated under the Act, against a backlog that independent estimates suggest could run to several thousand systems requiring assessment over the next eighteen months.
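The arithmetic of the mismatch is stark even under generous assumptions. The sketch below assumes, purely for illustration, that each notified body completes 25 assessments a year and that the backlog sits at 3,000 systems; neither figure comes from the Act or from the trackers cited here.

```python
# Back-of-envelope queue arithmetic for the conformity assessment bottleneck.
# All inputs are illustrative assumptions, not published figures.

notified_bodies = 40        # designated bodies as of mid-2025 (per the article)
assessments_per_body = 25   # assumed annual throughput per body
backlog = 3000              # assumed mid-range of "several thousand" systems

annual_capacity = notified_bodies * assessments_per_body
years_to_clear = backlog / annual_capacity

print(f"Annual capacity: {annual_capacity} assessments")
print(f"Years to clear a static backlog: {years_to_clear:.1f}")
# ~3 years against an eighteen-month deployment horizon -- and the backlog is
# not static: new high-risk systems enter the queue faster than old ones leave.
```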

EDRi, the European Digital Rights network, has raised a concern that is distinct from the bottleneck itself but equally serious: that the conformity assessment process, even when it does take place, is not surfacing the algorithmic harms it was designed to catch. Their commentary on early assessment practice noted that assessors were concentrating on process documentation, data governance records, and technical architecture descriptions rather than on empirical testing of discriminatory outputs or adversarial robustness. A conformity assessment that confirms a company has completed its paperwork is not the same as a conformity assessment that confirms a system is safe to deploy against vulnerable populations. The Act does not, in its current form, require the latter with sufficient specificity.
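What empirical testing of discriminatory outputs could look like is not a mystery. The sketch below implements one standard check, the four-fifths disparate-impact ratio, on hypothetical decision data; it is a single illustrative test drawn from fairness auditing practice, not a procedure the Act currently prescribes.

```python
# Minimal sketch of a disparate-impact check (the "four-fifths rule"):
# compare positive-outcome rates across demographic groups. One illustrative
# fairness test among many; not a procedure mandated by the AI Act.

from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, positive_outcome) pairs from a deployed system."""
    totals: dict[str, list[int]] = defaultdict(lambda: [0, 0])  # group -> [positives, n]
    for group, positive in decisions:
        totals[group][0] += int(positive)
        totals[group][1] += 1
    return {g: pos / n for g, (pos, n) in totals.items()}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group selection rate over the highest; below 0.8 is a red flag."""
    return min(rates.values()) / max(rates.values())

# Hypothetical screening decisions from an employment AI system.
sample = (
    [("group_a", True)] * 60 + [("group_a", False)] * 40
    + [("group_b", True)] * 35 + [("group_b", False)] * 65
)

rates = selection_rates(sample)
ratio = disparate_impact_ratio(rates)
print(rates, f"ratio={ratio:.2f}", "FLAG" if ratio < 0.8 else "ok")
```

A test of this kind takes an afternoon to run against a deployed system; a documentation review that never touches live outputs will never surface what it catches.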

[Image: a European Parliament committee hearing room, seen from the gallery, with MEPs at curved benches facing expert witnesses at a central table]

The Enforcement Gap

Behind all of this sits an enforcement landscape that is fragmented in ways the Act's architects underestimated. National competent authorities vary enormously in capacity and appetite. Germany's Bundesnetzagentur and France's CNIL have moved relatively quickly to build AI-specific enforcement teams; several smaller member states have assigned the AI Act supervisory function to existing digital or data protection authorities with no additional headcount. The result is that the same system deployed by the same company in Frankfurt and in Vilnius will face materially different levels of scrutiny, a problem the AI Office can coordinate around but cannot solve through guidance alone.

The market surveillance provisions, which are supposed to catch non-compliant systems already in circulation, depend on member state authorities having both the tools to identify suspect systems and the legal powers to compel disclosure of technical documentation. Both conditions are inconsistently met across the single market. Bird & Bird's tracker identified at least six member states where the designation of market surveillance authorities under the Act remained incomplete more than twelve months after the relevant provisions took effect. In those jurisdictions, enforcement is effectively suspended pending administrative decisions that are themselves subject to domestic political pressures.

The quantitative picture reinforces the qualitative assessment: progress is real but unevenly distributed, and several headline metrics suggest the compliance infrastructure is not scaling at the rate the Act's enforcement architecture assumes.

- Fewer than 40 notified bodies formally designated under the Act as of mid-2025, against a projected backlog of several thousand high-risk systems requiring conformity assessment over the next eighteen months: a critical infrastructure gap.
- A 10^25 FLOP training compute threshold for systemic-risk classification under the GPAI provisions, regarded as technically arbitrary at enactment and now actively contested as frontier architectures complicate straightforward compute accounting.
- At least six member states where market surveillance authority designation remained incomplete a year after the relevant provisions took effect, leaving enforcement there effectively suspended.
- A statutory deadline of 2 August 2027 for the Commission's general review of scope, enforcement effectiveness, and risk classification, a timeline critics argue the pace of frontier model development has already made indefensible.

Should the 2027 Review Be Brought Forward?

The Act mandates a general review by the Commission no later than 2 August 2027. That review is supposed to assess the Act's scope, its enforcement effectiveness, and whether the risk classification categories remain appropriate given technological development. The question being asked with increasing urgency in Brussels is whether waiting until 2027 is a defensible choice.

The honest answer is that it is not. Three specific problems require legislative attention before 2027, and none of them can be adequately addressed through implementing acts, delegated regulations, or AI Office guidance alone. First, the GPAI compute threshold needs either replacement with a capability-based trigger or supplementation with a secondary criterion that is less easily gamed. Second, the conformity assessment framework needs a structural intervention: either a fast-track designation process for new notified bodies with AI-specific technical competence, or a hybrid model that allows AI Office technical staff to conduct or co-conduct assessments in high-volume or high-risk cases. Third, the enforcement asymmetry between member states needs a coordination mechanism with actual teeth, not just the soft-law harmonisation the AI Office is currently attempting.
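As an illustration of what a supplemented trigger might look like, the sketch below combines the existing compute presumption with a hypothetical capability-based prong. The benchmark names and cut-offs are invented placeholders; defining defensible ones is precisely the legislative work the review would have to do.

```python
# Illustrative sketch of a two-pronged systemic-risk trigger: compute OR
# capability. The capability benchmarks and cut-offs are invented placeholders;
# the Act currently specifies only the 10^25 FLOP presumption (Article 51).

COMPUTE_THRESHOLD_FLOPS = 1e25

# Hypothetical capability criteria a legislator might define in a delegated act.
CAPABILITY_CUTOFFS = {
    "autonomous_code_execution_score": 0.7,
    "bio_uplift_eval_score": 0.5,
}

def systemic_risk(training_flops: float, capability_scores: dict[str, float]) -> bool:
    """Classify as systemic-risk if EITHER prong fires. Harder to game than
    compute alone: lowering reported FLOPs no longer helps if the model
    still clears a capability cut-off."""
    compute_prong = training_flops >= COMPUTE_THRESHOLD_FLOPS
    capability_prong = any(
        capability_scores.get(name, 0.0) >= cutoff
        for name, cutoff in CAPABILITY_CUTOFFS.items()
    )
    return compute_prong or capability_prong

# A model engineered to sit just under the compute line can still be caught:
print(systemic_risk(8e24, {"autonomous_code_execution_score": 0.82}))  # True
```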

Bringing the review forward to 2026 would not require a full legislative reopening of the Act. A targeted amending regulation, focused on these three structural problems, could be drafted without disturbing the risk-tier architecture or the prohibited practices list, both of which are working. The political will for such a move exists in the European Parliament, where members of both IMCO and LIBE have indicated publicly that they regard the current timeline as too slow given the pace of model development. The Commission's AI Office has the technical credibility to draft the necessary proposals. What has been missing is a formal political decision to treat the Act as a living instrument rather than a five-year project that must not be touched until its review date arrives.

The EU has built something genuinely significant with this Act. The risk-tier framework, the AI Office, and the prohibited practices list are real achievements that deserve defence against those who would weaken them in the name of competitiveness. But defending the Act's achievements does not mean pretending its failures are either minor or distant. The GPAI provisions are broken in material ways. The conformity assessment system is not scaling. Enforcement is fragmented. These are not edge-case concerns: they are central to whether the Act delivers on its promise. Waiting until 2027 to address them is a choice, and it is the wrong one.

THE AI IN EUROPE VIEW

The EU AI Act was always going to be imperfect at birth. Legislation that tries to regulate an entire technology class across twenty-seven jurisdictions simultaneously cannot be otherwise. What is not forgivable is treating known structural failures as features to be reviewed on a pre-set timetable when the technology they are supposed to govern is moving on a dramatically faster cycle. The general-purpose AI provisions were a late addition that the Act's architecture was not designed to carry, and they are visibly buckling.

The conformity assessment bottleneck is not a teething problem: it is a design defect that will produce a growing queue of high-risk systems deployed without meaningful oversight. The enforcement asymmetry between member states is a single-market integrity issue, not merely a compliance inconvenience. The AI Office has done well with what the Act gives it, and that is precisely the problem: the Act does not give it enough.

A targeted amending regulation in 2026, focused on the GPAI compute trigger, the notified body shortage, and enforcement coordination, is not a concession to critics of the Act. It is the responsible act of a legislature that takes its own framework seriously enough to fix it when fixing is needed. The 2027 review date should be treated as a deadline for broader assessment, not a cordon around intervention.
