The EU AI Act Is Not The Global Template: What European Policymakers Can Learn From The Voluntary Model

Eighteen months into the EU AI Act's implementation, compliance costs are rising and European model output is lagging. The case for outcome-based, voluntary AI assurance frameworks is no longer a fringe position. European policymakers and their partners need to ask hard questions about whether prescriptive regulation is delivering what it promised.

The EU AI Act was sold as the world's gold standard for AI governance. Eighteen months into its phased implementation, the compliance costs are real, European frontier model output is lagging behind the United States and East Asia, and a growing number of researchers and regulators are asking whether prescriptive risk-tiering was the right instrument. The honest answer, supported by the data now available, is that the voluntary, outcome-based model deserves serious reconsideration, not as an excuse to abandon consumer protection, but as a more effective way to deliver it.

Key Takeaways

  • EU AI Act compliance costs are creating a measurable drag on product iteration speed
  • The Stanford AI Index 2026 shows European model output growing more slowly than US and East Asian peers
  • Voluntary, outcome-based frameworks can protect consumers more precisely than horizontal risk classification
  • Sectoral regulators with genuine AI literacy are the critical missing ingredient in any lighter-touch model
  • Financial services firms face the sharpest compliance overhead from overlapping AI Act and DORA obligations


Where The EU AI Act Creates Real Costs For European Financial Services

The EU AI Act imposes mandatory risk classifications, high-risk system obligations, and significant governance paperwork on firms operating across the single market. For financial services firms in particular, which already carry compliance obligations under DORA, MiFID II, and the EBA's algorithmic model risk guidelines, the AI Act layers a parallel documentation and audit stack on top of existing requirements.

The obligations include:

  • Applicability assessments to determine whether a system qualifies as high-risk under Annex III
  • Documented conformity assessments and technical documentation before deployment
  • Transparency labelling for AI-generated content and disclosures to affected individuals
  • Ongoing post-market monitoring and incident reporting to national market surveillance authorities
  • Registration in the EU database for high-risk systems before they go live

Each obligation is defensible in isolation. In combination, they form a compliance stack that imposes a measurable drag on product iteration speed, particularly for smaller fintechs and challenger banks that cannot absorb the overhead through dedicated legal and compliance teams.
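
To make the combined overhead concrete, here is a minimal, hypothetical sketch of the kind of per-system compliance checklist a deployer might maintain. The category label, field names, and the flat pass/fail logic are illustrative assumptions; the real Annex III test, and the split of obligations between providers and deployers, is considerably more nuanced.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical per-system record a firm might keep under the AI Act."""
    name: str
    annex_iii_category: str | None  # e.g. "credit-scoring"; None if out of scope
    conformity_assessed: bool = False
    technical_docs_complete: bool = False
    transparency_labelled: bool = False
    registered_in_eu_db: bool = False
    monitoring_plan_active: bool = False


def deployment_blockers(system: AISystem) -> list[str]:
    """List the outstanding obligations that block go-live for a high-risk system."""
    if system.annex_iii_category is None:
        return []  # not high-risk: the obligations below do not apply
    checks = {
        "conformity assessment": system.conformity_assessed,
        "technical documentation": system.technical_docs_complete,
        "transparency labelling": system.transparency_labelled,
        "EU database registration": system.registered_in_eu_db,
        "post-market monitoring plan": system.monitoring_plan_active,
    }
    return [obligation for obligation, done in checks.items() if not done]


# A hypothetical fintech credit model with paperwork still outstanding:
model = AISystem("affordability-scorer-v3", annex_iii_category="credit-scoring",
                 conformity_assessed=True)
print(deployment_blockers(model))
# ['technical documentation', 'transparency labelling',
#  'EU database registration', 'post-market monitoring plan']
```

Even in this toy form, every unfinished item represents a workstream with legal review attached, which is precisely the iteration-speed cost that smaller firms report.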


What A Voluntary, Outcome-Based Model Looks Like In Practice

The alternative is not deregulation. It is a different regulatory architecture, one that anchors on outcome obligations rather than input prohibitions, and that uses voluntary assurance frameworks as the measurement layer rather than mandatory pre-deployment licensing.

Professor Sandra Wachter of the Oxford Internet Institute, one of the UK's leading voices on AI accountability, has argued consistently that rights-based outcome obligations, backed by strong sectoral enforcement, deliver more precise protection than horizontal risk classification. Her position, expressed in multiple peer-reviewed papers since 2021, is that the EU's tiered approach risks becoming a compliance theatre exercise that satisfies regulators on paper while missing the actual harms it was designed to prevent.

In the UK, the Financial Conduct Authority has taken a deliberately different posture. The FCA's AI update published in 2024 emphasised outcomes-based principles, governance expectations, and explainability requirements mapped to existing regulatory frameworks, rather than a separate AI-specific licensing regime. The FCA's approach allows firms to demonstrate compliance through their own internal governance, subject to supervisory review, rather than pre-clearing systems through a centralised registry.

That contrast matters for the EU debate. The UK and the EU are regulating broadly comparable financial markets with broadly comparable consumer protection objectives. The divergence in approach is not an accident of political culture; it reflects a genuine disagreement about which architecture is more likely to work.

The Innovation Evidence Is Now In

The most important evidence is not rhetorical. The Stanford AI Index 2026 shows that European AI model output has grown more slowly than output from the United States and East Asia over the past two years, despite world-class research infrastructure at institutions including ETH Zurich, the Max Planck Institute, and INRIA. Meanwhile, Mistral AI, the most prominent European frontier lab, has consistently cited regulatory uncertainty as a factor in its decision to structure certain product lines through non-EU entities.

The correlation between lighter-touch regulatory environments and faster model shipment cadence is not causal proof on its own. Multiple factors drive innovation output:

  • Talent supply and researcher mobility across borders
  • Access to growth capital and the depth of venture markets
  • Sectoral demand signals from large enterprise customers
  • Public compute infrastructure and data access regimes

But the simplest explanation for why Europe is producing fewer frontier models despite its research base is that the regulatory stack discourages the iteration cycles that frontier work requires. Compliance teams and legal review processes slow the deployment loops that convert research into product. That is a structural problem, not a one-off.


Why Consumer Protection Arguments Need A Closer Read

The standard defence of the EU model is that voluntary regimes under-protect consumers and workers. The strongest version of this argument deserves a fair hearing. AI deployments in credit scoring, insurance underwriting, hiring, healthcare triage, and financial advice all carry risks where market failure would be real and costly, and where individual consumers lack the information to protect themselves.

The response, however, is that outcome-based rules combined with rigorous sectoral enforcement cover these risks more precisely than horizontal risk classification. A credit scoring algorithm that discriminates on protected characteristics is already illegal under the Equal Treatment Directives and national anti-discrimination law. A medical decision-support tool that causes patient harm already triggers product liability and medical negligence frameworks. The AI Act, in its current form, layers an additional compliance regime on top of those existing obligations rather than filling gaps in them.

The FCA's approach in the UK illustrates what the alternative looks like in practice. Rather than requiring firms to pre-register AI systems or obtain conformity assessments, the FCA expects firms to demonstrate that their AI governance meets the same outcomes it demands of any other risk management process: fairness, explainability, and robustness. The burden of proof sits with the firm, and supervisory challenge is real, but the mechanism is proportionate to the actual risk rather than the category of the tool.
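
As a rough illustration of what demonstrating an outcome can look like inside a firm's own governance, the sketch below computes an approval-rate gap across a protected group on a toy portfolio. The figures, the metric choice, and the 0.05 tolerance are assumptions for illustration, not FCA-specified numbers.

```python
# Toy portfolio snapshot: loan approvals split by protected-group membership.
# All figures are invented for illustration.
approved = {"group_a": 620, "group_b": 565}
assessed = {"group_a": 1000, "group_b": 1000}

rate_a = approved["group_a"] / assessed["group_a"]
rate_b = approved["group_b"] / assessed["group_b"]
gap = abs(rate_a - rate_b)

print(f"approval rates: {rate_a:.3f} vs {rate_b:.3f}; gap {gap:.3f}")
if gap > 0.05:  # illustrative internal tolerance, not a regulatory threshold
    print("outcome tolerance breached: escalate to the model risk committee")
```

The point of the mechanism is that the firm chooses and defends the metric, logs it per release, and faces supervisory challenge over that choice, rather than pre-clearing the tool through a central registry.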

Three Practical Positions For EU Policymakers To Consider Now

The AI Act is not going to be repealed, and this article is not arguing that it should be. But there are practical adjustments that could reduce the compliance drag without weakening consumer protection:

  1. Anchor sectoral guidance on outcome obligations. The European Banking Authority and EIOPA should issue sector-specific guidance that maps AI Act requirements to existing supervisory frameworks, reducing duplication and clarifying which obligations are genuinely additive.
  2. Introduce a voluntary assurance track within the high-risk tier. The current Annex III classification catches a very wide range of systems posing very different levels of actual risk. A voluntary conformity track, backed by accredited third-party auditors, would allow proportionate treatment of the lower-risk end of that range without removing accountability.
  3. Invest seriously in sectoral regulators' AI literacy. Any lighter-touch model depends on competent supervisors who can challenge firms meaningfully. Without that investment, voluntary frameworks degrade into marketing exercises. The European Commission's AI Office is a start, but resourcing and sectoral depth remain concerns.

The Honest Counter-View

Voluntary regimes carry genuine risks that deserve acknowledgement. They depend on credible enforcement of outcome obligations, which requires regulators with technical capacity and a genuine willingness to impose sanctions when firms fail. A voluntary framework without credible audit does become a checkbox exercise, and the history of self-regulatory models in financial services is not uniformly encouraging.

Professor Wachter's broader body of work also notes that outcome-based regulation can be gamed if the outcome metrics themselves are poorly specified. Fairness benchmarks, explainability requirements, and robustness standards all require careful definition to avoid becoming procedural fig leaves. The EU Act's prescriptiveness is, in part, a response to that risk.
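
A small worked example shows why specification matters. In the hypothetical figures below, two groups receive identical approval rates, so a demographic-parity target is formally satisfied, while creditworthy applicants in one group are approved far less often.

```python
# Hypothetical outcome counts for a credit model, per group:
# tp = approved and creditworthy, fp = approved but not creditworthy,
# positives = total creditworthy applicants, total = all applicants.
groups = {
    "group_a": dict(tp=300, fp=100, positives=500, total=1000),
    "group_b": dict(tp=320, fp=80, positives=800, total=1000),
}

for name, g in groups.items():
    approval_rate = (g["tp"] + g["fp"]) / g["total"]
    tpr = g["tp"] / g["positives"]  # share of creditworthy applicants approved
    print(f"{name}: approval rate {approval_rate:.2f}, true-positive rate {tpr:.2f}")

# group_a: approval rate 0.40, true-positive rate 0.60
# group_b: approval rate 0.40, true-positive rate 0.40
```

A regime that audits only the parity number would pass this model; one that also specifies error-rate balance would not. That is the specification problem Wachter's work points to, and it cuts against naive versions of the voluntary and the prescriptive model alike.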

The best available model is therefore not a binary choice between the EU Act and full deregulation. It is a voluntary assurance framework, mapped to sector-specific outcome obligations, backed by well-resourced regulators with meaningful sanction powers and genuine technical expertise. That combination is achievable. The current Act, as implemented, falls short of it from the opposite direction: it is heavy on procedure and light on outcome verification.
