The EU AI Act Has a Rival: What Europe's Financial Sector Must Learn from the World's Second Comprehensive AI Law

Ten weeks after a comprehensive national AI law took effect in a major non-European jurisdiction, global tech firms are scrambling to comply. For EU and UK financial services companies deploying AI in credit decisions, healthcare diagnostics, and public-facing systems, the lessons are immediate and the compliance clock is already ticking.

The world now has two comprehensive, jurisdiction-spanning AI regulatory frameworks in force, and European financial services firms had better be paying attention to both of them. While much of the industry's compliance energy remains fixed on the phased rollout of the EU AI Act, a second national AI law took effect on 22 January 2026 in a major economy outside Europe, creating extraterritorial obligations that snare any company meeting relatively modest revenue or user thresholds. Ten weeks in, big tech is scrambling, enforcement decrees are being finalised at pace, and the era of jurisdictional arbitrage on AI is definitively over.

The law in question unified 19 separate regulatory proposals into a single regime covering generative AI, high-impact AI, and high-performance AI simultaneously. Its extraterritorial reach is explicit: foreign companies are captured if they exceed defined revenue or user thresholds, regardless of where they are headquartered. For European firms with international AI deployments, particularly in financial services where credit scoring, fraud detection, and algorithmic trading already trigger scrutiny at home, the exposure is real and immediate.


What the New Regime Actually Requires

The compliance obligations divide into three distinct categories. Generative AI systems must notify users that products are AI-powered, label AI-generated outputs, and apply deepfake labelling requirements. High-impact AI, defined as systems deployed in public decision-making, healthcare, transport, energy, and credit decisions, faces far stricter obligations: pre-deployment impact assessments, risk evaluation, mandatory human oversight, user notifications, and continuous monitoring. Systems exceeding 10^26 floating-point operations (FLOPs) in training compute must independently conduct and document risk mitigation assessments.
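For engineering teams wondering whether the compute trigger is even in play, the widely used approximation of roughly 6 x parameters x training tokens gives a first-order estimate. A minimal sketch, assuming the threshold applies to total training FLOPs (an engineering heuristic, not a legal test):

```python
# Rough check of whether a training run approaches the 1e26-FLOP threshold,
# using the common 6 * N * D approximation (N = parameters, D = training tokens).
# This is an order-of-magnitude estimate, not a legal determination.

THRESHOLD_FLOPS = 1e26

def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute in floating-point operations."""
    return 6 * params * tokens

def exceeds_threshold(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= THRESHOLD_FLOPS

# Example: a 70B-parameter model trained on 15 trillion tokens
# lands around 6.3e24 FLOPs, comfortably below the 1e26 line.
print(f"{estimated_training_flops(70e9, 15e12):.2e}")
print(exceeds_threshold(70e9, 15e12))
```

The practical takeaway is that only frontier-scale training runs come near the trigger; most in-house financial services models fall well under it, leaving the generative and high-impact categories as the operative concern.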

The penalties are not trivial, and reputational damage in regulated sectors compounds the financial exposure. Enforcement has not yet begun in earnest, but the responsible ministry has publicly committed to transparent, consistent oversight.


A Convergence European Firms Cannot Ignore

This new law does not exist in isolation. It arrived simultaneously with fresh agentic AI governance guidance from another jurisdiction and within weeks of a further national AI statute taking effect elsewhere. The cumulative effect is a patchwork of overlapping, competing compliance regimes that are now live, not theoretical. For EU and UK firms that have spent the past two years modelling their AI governance programmes around the EU AI Act alone, the reckoning is uncomfortable.

Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies in Brussels and one of Europe's foremost AI governance analysts, has argued consistently that the EU AI Act's risk-based tiering creates clear obligations for high-risk applications but leaves significant ambiguity in the middle tiers. The new law takes a different structural approach: rather than four risk tiers, it targets three specific AI categories with prescriptive rules for each. Neither model is obviously superior, but the coexistence of both means European firms operating internationally must now maintain parallel compliance architectures.

Dragoș Tudorache, the Romanian MEP who co-led the European Parliament's negotiations on the EU AI Act, has repeatedly emphasised that the Act's extraterritorial provisions are designed to prevent regulatory arbitrage. The emergence of a second comprehensive extraterritorial AI law reinforces that logic globally: no major economy is willing to let foreign AI providers operate without accountability. For financial services firms, this convergence means that compliance-by-design is no longer a differentiator; it is the baseline cost of market access.

Financial Services: The Sector Under Most Pressure

Credit decisions sit explicitly within the high-impact AI category under the new law, mirroring the EU AI Act's own classification of credit scoring as a high-risk application. For European banks, insurers, and fintech platforms that have built AI-driven underwriting or affordability assessment tools and deployed them across multiple jurisdictions, the compliance burden is now multiplicative rather than additive.

The practical questions are already stacking up inside compliance teams. Does a large language model used to generate personalised financial advice qualify as high-impact AI, or merely as a generative AI system with lighter obligations? Does a symptom-checking chatbot embedded in an employee health benefit platform trigger pre-deployment impact assessment requirements? Where exactly is the line between informational software and a system that influences a consequential decision?

These are not hypothetical. They are the questions that legal and compliance officers at firms including major European banks and AI platform providers are actively working through, and the enforcement decrees that would clarify them have not yet been published.

Implementation Ambiguities: The Next Eight to Twelve Weeks Are Critical

Three significant ambiguities remain unresolved. First, the definition of high-impact AI names six sectors but does not codify granular thresholds. A credit decision algorithm at a retail bank and a macro-level risk model at a central bank both theoretically fall within scope, but the practical obligations may differ substantially once implementing guidance arrives.

Second, human oversight is mandated but not precisely defined. The law does not specify whether this requires human-in-the-loop veto authority, human review before each deployment, or continuous real-time monitoring. For financial services firms running automated decisioning at scale, the difference between these interpretations is operationally enormous.
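To make the gap between those interpretations concrete, here is a minimal sketch of the strictest reading, human-in-the-loop veto authority, applied to a hypothetical credit-decisioning flow. The function names, score cut-offs, and routing logic are all illustrative assumptions, not drawn from the law or any firm's actual process:

```python
# Illustrative human-in-the-loop veto pattern for automated decisioning.
# Whether this reading (versus pre-deployment review or continuous
# monitoring) satisfies the oversight mandate is exactly the open question.

from typing import Callable

def decide_with_veto(
    model_decision: Callable[[dict], str],
    human_review: Callable[[dict, str], bool],
    application: dict,
) -> str:
    """Run the model, then let a human reviewer veto adverse outcomes."""
    decision = model_decision(application)
    if decision == "deny" and not human_review(application, decision):
        # Reviewer vetoed the automated denial: route to manual underwriting.
        return "escalate"
    return decision

# Stub model and reviewer for demonstration only.
model = lambda app: "deny" if app.get("score", 0) < 600 else "approve"
reviewer = lambda app, d: app.get("score", 0) < 550  # uphold only clear denials

print(decide_with_veto(model, reviewer, {"score": 580}))
```

Under this reading every adverse decision needs a staffed review path; under a pre-deployment-review reading, none do at runtime. For a lender processing millions of applications, that is the operational difference the implementing decrees must settle.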

Third, deepfake labelling requirements remain loosely drafted. Will they apply only to synthetic media mimicking real individuals, or to any AI-generated content whatsoever? The answer has significant implications for firms using AI to generate client communications, regulatory reports, or risk summaries.

Delayed clarity is costly. Foreign investors planning operations in any jurisdiction subject to this law cannot finalise technology architectures or vendor contracts while core definitions remain open. The responsible ministry is expected to publish substantive implementation guidance by mid-2026, but that timeline is not formally committed. Companies should assume the law applies as written and begin compliance audits immediately; ambiguity is not a compliance excuse.

How European AI Strategy Must Respond

The emergence of a second comprehensive AI law with extraterritorial reach accelerates a dynamic that European policymakers and financial regulators have been slow to operationalise: the need for mutual recognition or at least structured equivalence frameworks between major AI regulatory regimes. Without them, European firms face the prospect of maintaining genuinely separate compliance programmes for each jurisdiction, with no credit given for investment already made in EU AI Act adherence.

The European Banking Authority and the European Securities and Markets Authority have both begun issuing guidance on AI use in financial services, but neither has yet addressed the cross-jurisdictional compliance burden in concrete terms. The Bank of England's Prudential Regulation Authority published a discussion paper on AI model risk in 2024 and has signalled further guidance for 2026, but coordination with non-European frameworks remains largely absent from the published agenda.

For EU and UK financial services firms, the immediate practical steps are clear. Map all AI systems against both the EU AI Act's high-risk categories and the equivalent categories under any jurisdiction where you serve users above the relevant thresholds. Identify credit, healthcare, and public-facing decisioning applications first, as these carry the highest exposure under both regimes. Engage in any open rule-making consultations now, because companies that file substantive comments during decree finalisation can meaningfully shape how obligations are interpreted. And do not wait for enforcement to begin before treating compliance as mandatory.
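The mapping exercise described above can start as a simple shared inventory before any tooling investment. A minimal sketch, with illustrative domain labels standing in for the statutes' actual definitions (real classification requires legal review of each system):

```python
# Illustrative dual-regime inventory. The domain sets approximate the
# article's description of EU AI Act high-risk and new-law high-impact
# categories; they are assumptions, not the legal category boundaries.

from dataclasses import dataclass

EU_HIGH_RISK_DOMAINS = {"credit_scoring", "healthcare", "public_decisioning"}
NEW_LAW_HIGH_IMPACT_DOMAINS = {
    "public_decisioning", "healthcare", "transport", "energy", "credit_scoring",
}

@dataclass
class AISystem:
    name: str
    domain: str
    generative: bool = False

def classify(system: AISystem) -> dict:
    """Map one system against both regimes, flagging the highest-exposure cases."""
    eu_high_risk = system.domain in EU_HIGH_RISK_DOMAINS
    new_high_impact = system.domain in NEW_LAW_HIGH_IMPACT_DOMAINS
    return {
        "system": system.name,
        "eu_ai_act": "high_risk" if eu_high_risk else "other",
        "new_law": "high_impact" if new_high_impact
                   else ("generative" if system.generative else "other"),
        "priority_review": eu_high_risk and new_high_impact,
    }

inventory = [
    AISystem("underwriting-model", "credit_scoring"),
    AISystem("marketing-copy-llm", "marketing", generative=True),
]
for s in inventory:
    print(classify(s))
```

Systems flagged for priority review, the credit, healthcare, and public-facing decisioning applications, are the ones where obligations stack under both regimes and where audit work should begin first.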

The second comprehensive national AI law is live. The question is no longer whether overlapping global AI regulation will affect European financial services. It already does.
