EU AI Act 2026: The Essential Plain-English Guide for Business and Legal Teams

The EU AI Act is now the world's most consequential AI regulation, and its core obligations are live or arriving fast. This guide walks European business and legal teams through scope, prohibited practices, high-risk categories, foundation-model rules, enforcement structure, and the penalties that make non-compliance genuinely expensive.

The EU AI Act is not a future problem: its first hard obligations entered into application on 02/02/2025, and the rules that will reshape how European companies build, buy, and deploy artificial intelligence are arriving in waves through 2026 and beyond.

Passed by the European Parliament on 13/03/2024 and published in the Official Journal of the European Union on 12/07/2024, Regulation (EU) 2024/1689 is the world's first comprehensive, legally binding horizontal framework for AI. It applies not just to companies headquartered in the EU, but to any organisation placing an AI system on the EU market or whose AI outputs affect people inside the EU. That extraterritorial reach is deliberate and sweeping.

This guide covers everything a business or legal team needs to know: what the Act covers, what it bans outright, how risk tiers work, what it demands of foundation-model providers, how enforcement is structured, and what non-compliance actually costs.

Scope: Who and What Does It Cover?

The Act applies to providers (those who develop or place AI systems on the market), deployers (businesses and public bodies that use AI systems in a professional context), importers, distributors, and product manufacturers. It covers AI systems and, separately, general-purpose AI (GPAI) models, which are treated under their own distinct set of rules.

There are genuine exclusions. AI used exclusively for military or national-security purposes falls outside the Act, as does AI used purely for personal, non-professional activity. Scientific research and development benefits from a lighter-touch regime during the development phase, though systems released to the market must then comply in full.

"The AI Act applies not just to companies headquartered in the EU, but to any organisation placing an AI system on the EU market or whose AI outputs affect people inside the EU. That extraterritorial reach is deliberate and sweeping."
AI in Europe analysis of Regulation (EU) 2024/1689

The European Commission has published FAQ guidance clarifying that the Act covers AI systems made available free of charge, including open-weight models released publicly, where the provider has a commercial presence or a meaningful user base in the EU. That reading has significant implications for the open-source community, even if certain transparency obligations are modulated for genuinely open releases.

The Four-Tier Risk Architecture

The Act organises AI systems into four risk levels, and the compliance burden scales accordingly.

Unacceptable Risk: Prohibited Practices

A defined set of AI applications is banned outright from 02/02/2025. These prohibitions cover:

  • Subliminal manipulation techniques that exploit psychological weaknesses or unconscious biases to distort behaviour in ways that cause harm.
  • Exploitation of vulnerabilities of specific groups, including children and people with disabilities.
  • Social scoring by public authorities, where citizens are ranked or rated on the basis of their behaviour in ways that lead to unjustified or disproportionate treatment.
  • Real-time remote biometric identification (RBI) in publicly accessible spaces by law-enforcement, with only narrowly defined exceptions subject to prior judicial or independent administrative authorisation.
  • Retrospective RBI by law-enforcement, except for serious crimes and subject to judicial authorisation.
  • Emotion recognition in workplace and educational settings (with limited safety-use exceptions).
  • Biometric categorisation systems that infer sensitive attributes such as political opinion, religious belief, or sexual orientation.
  • AI-based predictive policing of individuals based solely on profiling.

The prohibitions are not hypothetical compliance footnotes. Early signals from German data-protection authorities, together with ongoing scrutiny of facial-recognition vendors across several Member States, suggest that enforcement interest in this tier is high from day one.


High-Risk AI Systems

High-risk systems face the Act's heaviest obligations before they can be placed on the market or put into service. Annex III of the Act lists eight domains that automatically qualify:

  • Biometric identification and categorisation.
  • Critical infrastructure management (energy, water, transport).
  • Education and vocational training (access, assessment, monitoring).
  • Employment, workforce management, and access to self-employment.
  • Access to essential private and public services, including credit scoring and insurance.
  • Law enforcement.
  • Migration, asylum, and border-control management.
  • Administration of justice and democratic processes.

High-risk providers must establish a risk-management system, ensure training-data governance, maintain technical documentation, enable human oversight, achieve a defined level of accuracy and robustness, and register in the EU database for high-risk AI systems before deployment. For AI systems embedded in products already governed by existing EU harmonisation legislation (medical devices, machinery, vehicles), the conformity-assessment route runs through the product's existing notified body or self-assessment pathway.

AlgorithmWatch, the Berlin-based civil-society organisation that maintains one of the most detailed public trackers of the Act's implementation, has flagged that the self-assessment route available to most Annex III systems creates significant risk of regulatory arbitrage, particularly in HR and credit-scoring applications where competitive pressure is highest.

Limited and Minimal Risk

Systems in the limited-risk tier, primarily chatbots and deepfake-generating tools, must meet targeted transparency obligations: users must be told they are interacting with an AI, and synthetic content must be labelled. Minimal-risk systems, the vast majority of commercial AI, have no mandatory obligations under the Act, though the Commission encourages voluntary codes of conduct.

General-Purpose AI Models: A Separate Rulebook

The GPAI provisions, which apply to providers of large foundation models used across many downstream tasks, are among the Act's most commercially significant elements. They entered application on 02/08/2025, six months after the prohibited-practices rules.

All GPAI model providers must prepare and maintain technical documentation, supply information to downstream providers, put in place a policy to comply with EU copyright law, and publish a sufficiently detailed summary of the content used to train the model. Where a GPAI model is integrated into a downstream high-risk system, the GPAI provider must cooperate with the downstream provider to enable compliance.

Models that pose systemic risk, defined initially as those trained on compute above 10^25 floating-point operations (FLOPs), face additional obligations: adversarial testing (red-teaming), incident reporting to the AI Office, cybersecurity measures, and energy-efficiency reporting.

The systemic-risk threshold matters enormously in practice. It currently captures the largest frontier models, including those developed by Mistral AI, the Paris-based laboratory whose Mistral Large model family sits close to the boundary, as well as models from non-European providers offered in the EU market. The European AI Office has the authority to revise the FLOPs threshold by delegated act as compute costs change.
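To make the 10^25 FLOPs threshold concrete, here is a back-of-the-envelope sketch in Python. It uses the widely cited ~6 × parameters × tokens approximation for dense-transformer training compute; that heuristic, and the illustrative model sizes below, are our assumptions, not part of the Act, and real compute accounting for the Act's purposes is more involved.

```python
# Rough check against the AI Act's systemic-risk compute threshold
# (cumulative training compute above 10^25 floating-point operations).
SYSTEMIC_RISK_FLOPS = 1e25


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Common dense-transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate crosses the Act's initial systemic-risk threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_FLOPS


# Hypothetical model sizes for illustration only:
# 70B parameters on 15T tokens -> ~6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(70e9, 15e12))   # False
# 400B parameters on 15T tokens -> ~3.6e25 FLOPs, above the threshold.
print(presumed_systemic_risk(400e9, 15e12))  # True
```

The point of the sketch is that the threshold is a bright line on estimated training compute, which is why the AI Office's power to revise it by delegated act matters so much to providers sitting near the boundary.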


Enforcement: The AI Office, the EU AI Board, and National Authorities

The Act creates a layered enforcement architecture designed to prevent the fragmentation that has plagued GDPR enforcement.

The European AI Office, established within the European Commission's Directorate-General for Communications Networks, Content and Technology (DG CNECT), has primary supervisory jurisdiction over GPAI model providers, regardless of where they are established. It is the lead authority for systemic-risk models and coordinates the overall implementation framework. The AI Office began operating in February 2024, before the Act even entered force, and has already launched its first consultation on GPAI codes of practice.

The EU AI Board brings together the AI supervisory authorities of all 27 Member States. Its remit is to issue opinions, coordinate cross-border cases, and advise the Commission on delegated acts. It is the mechanism intended to prevent the kind of supervisory inconsistency that has undermined the GDPR's credibility in some quarters.

National competent authorities designated by each Member State handle enforcement for everything below the GPAI tier. In practice, many Member States are expected to assign these responsibilities to existing data-protection authorities or sector regulators, though the Act requires a formally designated market-surveillance authority for AI.

The Centre for Information Policy Leadership (CIPL), which has published detailed practical guidance on the Act for corporate legal and compliance teams, notes that the overlap between AI Act obligations and GDPR requirements, particularly in automated decision-making, means that data-protection officers will need to be closely integrated into AI compliance programmes from the start.

Timeline: When Does What Apply?

  • 01/08/2024: Act entered into force.
  • 02/02/2025: Prohibited practices and AI literacy obligations apply.
  • 02/08/2025: GPAI model rules, governance obligations, and AI Office powers fully apply; Member States must have designated their national competent authorities.
  • 02/08/2026: High-risk AI systems (Annex III) obligations fully apply.
  • 02/08/2027: High-risk AI systems embedded in regulated products under existing EU harmonisation legislation must comply.

Penalties: The Numbers That Focus Minds

The Act's penalty regime is tiered to reflect the seriousness of the violation. Prohibited-practice violations carry fines of up to 35,000,000 euros or 7% of total worldwide annual turnover, whichever is higher. Violations of other obligations, including high-risk system requirements and GPAI rules, carry fines of up to 15,000,000 euros or 3% of worldwide annual turnover. Supplying incorrect, incomplete, or misleading information to authorities attracts fines of up to 7,500,000 euros or 1% of worldwide annual turnover.

For SMEs and start-ups, the Act specifies that penalties should be proportionate to their size and that the lower of the two figures (absolute or percentage-based) should apply. The Act explicitly requires national authorities to take into account the interests and specific needs of SMEs when applying penalties.
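The cap arithmetic above can be sketched in a few lines of Python. This is an illustration of the "whichever is higher" (or, for SMEs, lower) logic using the tier figures quoted in this article; the tier names and the `max_fine` helper are our own, and actual fines are set case by case by the competent authority up to these ceilings.

```python
# Administrative-fine ceilings as described above: (absolute cap in euros,
# percentage of total worldwide annual turnover). Tier names are illustrative.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_obligation":    (15_000_000, 0.03),
    "misleading_info":     (7_500_000,  0.01),
}


def max_fine(tier: str, worldwide_turnover: float, is_sme: bool = False) -> float:
    """Ceiling for a fine: the higher of the two caps, or the lower for SMEs."""
    absolute_cap, pct = FINE_TIERS[tier]
    turnover_cap = pct * worldwide_turnover
    return min(absolute_cap, turnover_cap) if is_sme else max(absolute_cap, turnover_cap)


# A company with EUR 2 bn worldwide turnover, prohibited-practice breach:
# the 7% turnover cap (~EUR 140 m) exceeds the EUR 35 m absolute cap.
print(max_fine("prohibited_practice", 2e9))
# The same breach by an SME with EUR 10 m turnover: the lower figure
# applies, i.e. 7% of turnover (~EUR 700,000) rather than EUR 35 m.
print(max_fine("prohibited_practice", 10e6, is_sme=True))
```

The SME branch is what makes the proportionality rule bite: for a small company, the percentage cap will almost always be the operative ceiling.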

The Act's reach, timelines, and financial stakes are best understood through hard figures: the dates, thresholds, and penalty caps above are the metrics business and legal teams should use to benchmark their compliance posture.

THE AI IN EUROPE VIEW

The EU AI Act is the most ambitious attempt by any jurisdiction to govern artificial intelligence through binding law, and that ambition deserves credit. The risk-tiering logic is fundamentally sound: concentrate compliance obligations where harm is most plausible, and leave lower-stakes applications to compete freely. The GPAI provisions, in particular, reflect a sophisticated understanding of how foundation models create systemic dependencies across an entire AI supply chain.

But the Act's credibility will be determined entirely by enforcement quality, and here the grounds for scepticism are substantial. The GDPR's uneven application across Member States, driven by differences in regulatory capacity and political will, is a cautionary tale that Brussels has not fully internalised. Designating national competent authorities is not the same as funding them adequately or insulating them from domestic industry lobbying.

The AI Office is the Act's most important institutional innovation, and it needs to be given both the resources and the independence to act. If the first major GPAI enforcement action takes three years to resolve, or if the systemic-risk threshold is quietly adjusted upwards under commercial pressure, the Act's deterrent effect collapses. European business teams should prepare for genuine compliance, not regulatory theatre. The penalties are real. The question is whether the enforcement machinery will be too.

AI Terms in This Article
benchmark

A standardized test used to compare AI model performance.

red-teaming

Deliberately trying to make an AI system fail or produce harmful outputs to find weaknesses.

compute

The processing power needed to train and run AI models.

open-weight

Models whose learned parameters are shared, but training code may not be.

