What the New Regime Actually Requires
The compliance obligations divide into three distinct categories. Generative AI systems must notify users that products are AI-powered, label AI-generated outputs, and comply with deepfake labelling requirements. High-impact AI, defined as systems deployed in public decision-making, healthcare, transport, energy, and credit decisions, faces far stricter obligations: pre-deployment impact assessments, risk evaluation, mandatory human oversight, user notifications, and continuous monitoring. Systems exceeding 10^26 floating-point operations (FLOPs) in training compute must independently conduct and document risk mitigation assessments.
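For teams trying to work out whether a model sits above the training-compute trigger, the widely used C ≈ 6·N·D approximation (N parameters, D training tokens) gives a first-order estimate. The sketch below is illustrative only; the model sizes are hypothetical, and the law's decrees may ultimately specify a different measurement methodology.

```python
# Rough screen against the 1e26 FLOP training-compute trigger,
# using the common C ~ 6 * N * D approximation
# (N = parameter count, D = training tokens).
# Model sizes below are illustrative assumptions, not real filings.

THRESHOLD_FLOPS = 1e26


def estimated_training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND rule of thumb."""
    return 6.0 * params * tokens


def exceeds_threshold(params: float, tokens: float) -> bool:
    """True if the estimate is above the statutory trigger."""
    return estimated_training_flops(params, tokens) > THRESHOLD_FLOPS


# A 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, well below 1e26.
print(exceeds_threshold(7e10, 1.5e13))  # False

# A hypothetical 1T-parameter model trained on 30T tokens:
# 6 * 1e12 * 3e13 = 1.8e26 FLOPs, above the trigger.
print(exceeds_threshold(1e12, 3e13))  # True
```

The point of the exercise is scope triage: most deployed financial-services models fall orders of magnitude below the trigger, so for them the high-impact and generative-AI obligations, not the compute-based ones, are what matter.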
The penalties are not trivial, and reputational damage in regulated sectors compounds the financial exposure. Enforcement has not yet begun in earnest, but the responsible ministry has publicly committed to transparent, consistent oversight.
A Convergence European Firms Cannot Ignore
This new law does not exist in isolation. It arrived simultaneously with fresh agentic AI governance guidance from another jurisdiction and within weeks of a further national AI statute taking effect elsewhere. The cumulative effect is a patchwork of overlapping, competing compliance regimes that are now live, not theoretical. For EU and UK firms that have spent the past two years modelling their AI governance programmes around the EU AI Act alone, the reckoning is uncomfortable.
Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies in Brussels and one of Europe's foremost AI governance analysts, has argued consistently that the EU AI Act's risk-based tiering creates clear obligations for high-risk applications but leaves significant ambiguity in the middle tiers. The new law takes a different structural approach: rather than four risk tiers, it targets three specific AI categories with prescriptive rules for each. Neither model is obviously superior, but the coexistence of both means European firms operating internationally must now maintain parallel compliance architectures.
Dragoș Tudorache, the Romanian MEP who co-led the European Parliament's negotiations on the EU AI Act, has repeatedly emphasised that the Act's extraterritorial provisions are designed to prevent regulatory arbitrage. The emergence of a second comprehensive extraterritorial AI law reinforces that logic globally: no major economy is willing to let foreign AI providers operate without accountability. For financial services firms, this convergence means that compliance-by-design is no longer a differentiator; it is the baseline cost of market access.
Financial Services: The Sector Under Most Pressure
Credit decisions sit explicitly within the high-impact AI category under the new law, mirroring the EU AI Act's own classification of credit scoring as a high-risk application. For European banks, insurers, and fintech platforms that have built AI-driven underwriting or affordability assessment tools and deployed them across multiple jurisdictions, the compliance burden is now multiplicative rather than additive.
The practical questions are already stacking up inside compliance teams. Does a large language model used to generate personalised financial advice qualify as high-impact AI, or merely as a generative AI system with lighter obligations? Does a symptom-checking chatbot embedded in an employee health benefit platform trigger pre-deployment impact assessment requirements? Where exactly is the line between informational software and a system that influences a consequential decision?
These are not hypothetical. They are the questions that legal and compliance officers at firms including major European banks and AI platform providers are actively working through, and the enforcement decrees that would clarify them have not yet been published.
Implementation Ambiguities: The Next Eight to Twelve Weeks Are Critical
Three significant ambiguities remain unresolved. First, the definition of high-impact AI names the covered sectors but does not codify granular thresholds within them. A credit decision algorithm at a retail bank and a macro-level risk model at a central bank both theoretically fall within scope, but the practical obligations may differ substantially once implementing guidance arrives.
Second, human oversight is mandated but not precisely defined. The law does not specify whether this requires human-in-the-loop veto authority, human review before each deployment, or continuous real-time monitoring. For financial services firms running automated decisioning at scale, the difference between these interpretations is operationally enormous.
Third, deepfake labelling requirements remain loosely drafted. Will they apply only to synthetic media mimicking real individuals, or to any AI-generated content whatsoever? The answer has significant implications for firms using AI to generate client communications, regulatory reports, or risk summaries.
Delayed clarity is costly. Foreign investors planning operations in any jurisdiction subject to this law cannot finalise technology architectures or vendor contracts while core definitions remain open. The responsible ministry is expected to publish substantive implementation guidance by mid-2026, but that timeline is not formally committed. Companies should assume the law applies as written and begin compliance audits immediately; regulatory ambiguity is not a compliance excuse.
How European AI Strategy Must Respond
The emergence of a second comprehensive AI law with extraterritorial reach accelerates a dynamic that European policymakers and financial regulators have been slow to operationalise: the need for mutual recognition or at least structured equivalence frameworks between major AI regulatory regimes. Without them, European firms face the prospect of maintaining genuinely separate compliance programmes for each jurisdiction, with no credit given for investment already made in EU AI Act adherence.
The European Banking Authority and the European Securities and Markets Authority have both begun issuing guidance on AI use in financial services, but neither has yet addressed the cross-jurisdictional compliance burden in concrete terms. The Bank of England's Prudential Regulation Authority published a discussion paper on AI model risk in 2024 and has signalled further guidance for 2026, but coordination with non-European frameworks remains largely absent from the published agenda.
For EU and UK financial services firms, the immediate practical steps are clear. Map all AI systems against both the EU AI Act's high-risk categories and the equivalent categories under any jurisdiction where you serve users above the relevant thresholds. Identify credit, healthcare, and public-facing decisioning applications first, as these carry the highest exposure under both regimes. Engage in any open rule-making consultations now, because companies that file substantive comments during decree finalisation can meaningfully shape how obligations are interpreted. And do not wait for enforcement to begin before treating compliance as mandatory.
The second comprehensive national AI law is live. The question is no longer whether overlapping global AI regulation will affect European financial services. It already does.