AI Act Article 5 in practice: what the first enforcement cases reveal about the European AI Office's priorities
Article 5 of the EU AI Act, the prohibited-practices chapter, became enforceable on 2 February 2025. Twelve months on, the European AI Office has opened a handful of cases. The case mix is small but instructive: it exposes where the Commission is choosing to spend its political capital and where it is deliberately holding back.
The European AI Office's first enforcement actions under Article 5 of the EU AI Act are fewer in number than critics feared and defenders hoped, but they are not trivial. Since the prohibited-practices chapter became binding on 2 February 2025, the Office has confirmed it is examining a small cluster of complaints and own-initiative inquiries, centred on three behavioural categories: subliminal manipulation, social scoring, and real-time biometric surveillance in publicly accessible spaces. What the Office has not yet done is close a case with a formal infringement decision, a fact that speaks volumes about the legal complexity involved and the political caution inside the Berlaymont.
Article 5 is the Act's hardest edge. Unlike the risk-tier framework that governs high-risk systems under Chapter III, the prohibited practices are absolute. There is no conformity assessment, no documentation audit and no corrective-action window. A system that falls inside Article 5 must simply not exist on the EU market. That makes enforcement decisions irreversible in a way that most regulatory action in the digital single market is not, and it concentrates minds accordingly.
"A decision that is successfully challenged before the Court of Justice of the European Union could hollow out Article 5's deterrent effect for years. That calculation, legitimate as it is, cannot become a permanent excuse for inaction."
Editorial analysis, AI in Europe
The Office, which sits within the Commission's Directorate-General for Communications Networks, Content and Technology (DG CONNECT), began its formal case-intake process in Q3 2024, ahead of the February 2025 applicability date. By the close of Q1 2026, it had acknowledged receiving more than thirty complaints from civil society, data-protection bodies, and member state authorities, but had formally opened what it describes as a "structured examination" in a significantly smaller number of matters. That distinction between a complaint received and a case formally opened matters: the Office is signalling that it will not treat every submission as a live enforcement file.
The case mix so far
Three broad clusters have emerged from the Office's public statements and from tracking work published by Bird & Bird's AI Act monitoring team.
The first cluster involves emotion-recognition and micro-targeting systems deployed in commercial contexts. Several complaints have pointed to tools used in recruitment screening and consumer-profiling pipelines that appear to infer psychological states from facial expression or voice tone in order to rank or exclude individuals. Article 5(1)(a) prohibits subliminal or purposefully manipulative techniques that materially distort behaviour in a manner that causes or is likely to cause significant harm, and Article 5(1)(b) extends the ban to systems that exploit vulnerabilities linked to age, disability or social and economic situation; emotion inference in the workplace is separately prohibited under Article 5(1)(f). The Office has indicated it is assessing whether certain vendor products marketed to HR departments in member states cross those lines, though it has declined to name those vendors publicly at this stage.
The second cluster is social scoring. Article 5(1)(c) bans general-purpose social scoring of natural persons by public authorities or on their behalf. Complaints here have largely originated from member state data-protection authorities, which have forwarded cases involving local-government pilots that used behavioural and socioeconomic data aggregates to allocate public services or flag individuals for additional scrutiny. The European Data Protection Board has issued guidance noting that scoring systems that produce differential treatment based on social behaviour across unrelated contexts are directly in scope, regardless of whether the label "social score" is applied by the operator.
The third cluster, and the most politically charged, involves real-time remote biometric identification in publicly accessible spaces. Article 5(1)(h) prohibits the use of such systems by law-enforcement authorities except in a tightly defined list of circumstances. The Office has confirmed it is in contact with two member state authorities regarding deployments that do not appear to fit any of the listed exceptions. It has not named those member states, but reporting by Politico Europe has suggested one involves a southern European jurisdiction that deployed a temporary surveillance network during a major public event in 2025.
How the cases are progressing
No Article 5 case has yet reached a formal infringement decision. The Office's enforcement procedure involves several phases: an intake assessment, a structured examination in which the operator or deployer is invited to provide information, a preliminary findings document, and then a decision phase. The Office can impose fines of up to 35 million euros or seven per cent of global annual turnover, whichever is higher, for Article 5 violations. It can also refer matters to the relevant national competent authority where the conduct is better characterised under another legal instrument, including the GDPR.
The slow pace has drawn criticism from civil-society organisations. AlgorithmWatch, the Berlin-based non-profit that monitors automated decision-making in Europe, has argued publicly that the Office is being overly cautious and that prolonged structured examinations allow prohibited systems to continue operating during the inquiry. The Office's counterargument, relayed through official statements, is that getting the first decisions right is more important than getting them quickly. A decision that is successfully challenged before the Court of Justice of the European Union could hollow out Article 5's deterrent effect for years.
Natasha Lomas, writing for Politico Europe's technology desk, has noted that the Office's resource constraints are also a material factor. The Office began 2025 with a staff complement that legal observers considered undersized relative to the volume of AI products and services it is responsible for supervising across twenty-seven member states. Hiring has accelerated, but the Office is simultaneously managing obligations under the General-Purpose AI (GPAI) model provisions that became applicable in August 2025, which compete for the same pool of technical and legal expertise.
What the case mix tells us about Commission strategy
The distribution of cases is not random. The Office appears to be sequencing its work to build precedent incrementally, starting with the categories where the legal text is clearest and where the facts are least likely to be contested on technical grounds.
Social scoring is conceptually the most straightforward of the Article 5 prohibitions. The definition requires that a general-purpose score be produced by or on behalf of a public authority and that it lead to detrimental or unfavourable treatment in social contexts unrelated to the context in which the data was generated. Where a local authority has commissioned a vendor to produce such a system, the contractual and technical trail is usually legible. The Office appears to be treating these cases as the lowest-risk entry points for enforcement.
Emotion recognition is more technically contested. The question of whether a system is genuinely inferring an emotion, as opposed to classifying observable facial movements, is actively debated among computer-vision researchers, and the Office will need to anchor any decision in a defensible technical standard. The AI Office has been in dialogue with the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC) on harmonised standards, but those standards are not yet finalised, which leaves the Office relying on its own technical expert panels.
The biometric surveillance cases are the highest-stakes and the slowest to move. Law-enforcement systems engage national-security sensitivities and member state prerogatives that the Commission is constitutionally cautious about overriding. The exceptions in Article 5(1)(h), including targeted searches for missing persons, prevention of specific and imminent terrorist threats, and the localisation of suspects in certain serious crimes, are deliberately narrow but also deliberately worded in ways that leave interpretive space. The Office is unlikely to close one of these cases without a formal opinion from the European Data Protection Board and possibly a referral to the Court of Justice.
Taken together, the first year's numbers establish the scale and the stakes of the regime: more than thirty complaints received, a far smaller set of structured examinations opened, no infringement decision yet, fines of up to 35 million euros or seven per cent of global annual turnover theoretically in play, and an Office that must stretch an undersized staff across supervision duties in twenty-seven member states.
Where enforcement goes next
The Office has signalled, through its published work programme and through statements by officials at the AI Office Forum held in Brussels in late 2025, that Q2 and Q3 2026 are likely to produce the first formal preliminary-findings documents. That does not necessarily mean final decisions, but it would represent a material step forward and would give operators their first concrete sense of how the Office reads the text in practice.
The member state dimension will grow in importance. National competent authorities are expected to play a significant role in Article 5 enforcement for systems that are deployed locally rather than cross-border. Several authorities, including France's Autorité de la concurrence and Germany's Federal Network Agency (Bundesnetzagentur), are building their AI supervision teams, but the pace varies considerably. Bird & Bird's tracker notes that fewer than half of member states had fully designated their national competent authority under the AI Act by January 2026, which creates coverage gaps the Office must informally fill.
The Commission's appetite for visible enforcement action will also be shaped by the political climate. The von der Leyen II Commission has staked part of its industrial credibility on the AI Act being a workable regulatory framework rather than a chilling instrument, and early enforcement that is seen as disproportionate or technically unsound would feed a narrative the Commission is keen to avoid. That political calculation is visible in the pace and the target selection of the first cases. It is not a reason to dismiss the enforcement effort, but it is a reason to read the case mix as a strategic document as much as a legal one.
THE AI IN EUROPE VIEW
The European AI Office is doing something genuinely difficult: building enforcement practice for a legal instrument with no direct precedent, against systems whose technical properties are contested, in a political environment where member states are territorial about security prerogatives and the Commission is anxious about its pro-innovation reputation. Some patience is warranted. What is not warranted is the assumption that slow case closure equals rigorous deliberation. There is a real risk that the Office's caution tips into institutional timidity, and that the first Article 5 decisions, when they finally arrive, are so carefully hedged as to provide minimal interpretive guidance for the industry they are supposed to regulate. The civil-society criticism from organisations like AlgorithmWatch deserves more than a polite acknowledgement: if prohibited systems remain operational throughout multi-year structured examinations, the Article 5 absolute ban becomes a conditional one in practice. The Office needs more staff, clearer internal timelines, and the political backing to close at least one case before the end of 2026, even if that means accepting some litigation risk. An enforcement regime that never enforces is not a regime; it is a press release.
Updates
Byline migrated from "Sofia Romano" (sofia-romano) to Intelligence Desk per editorial integrity policy.