The law in question unifies 19 separate regulatory proposals into a single regime. Unlike the EU AI Act's four-tier risk classification, this framework targets three distinct categories of AI: generative systems, high-impact systems, and high-performance systems trained beyond defined compute thresholds. Its extraterritorial scope is explicit and unapologetic. Foreign companies that meet any one of three revenue or user-volume thresholds are immediately in scope, regardless of where their servers sit.
For European banks, insurers, credit bureaus, and health-tech platforms with global deployments, this is not a distant regulatory curiosity. It is a live compliance obligation that is already competing for the same internal legal and engineering resources that the EU AI Act demands.
What the Law Actually Requires
The compliance obligations divide cleanly across three categories, and understanding them is a prerequisite to assessing exposure.
Generative AI systems must:
- Notify users that a product is AI-powered
- Label AI-generated outputs as such
- Implement deepfake labelling requirements for synthetic media
High-impact AI, defined as systems deployed in public decision-making, healthcare, transport, energy, nuclear operations, and credit decisions, faces considerably stricter obligations:
- Pre-deployment impact assessments
- Risk evaluation and documentation
- Mandatory human oversight
- User notifications at point of interaction
- Continuous monitoring post-deployment
High-performance AI, meaning systems whose training compute exceeds 10^26 floating-point operations, must conduct and document risk mitigation assessments before deployment.
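The 10^26 FLOP threshold can be screened with the widely used 6 × parameters × training-tokens approximation for transformer training compute. The sketch below is a rough triage tool, not a legal test: the 6ND heuristic and the example model figures are assumptions for illustration, not anything the law itself specifies.

```python
# Rough screen against the high-performance threshold (10^26 FLOPs).
# Uses the common 6 * N * D approximation for transformer training
# compute -- an engineering heuristic, NOT the law's definition.

HIGH_PERFORMANCE_THRESHOLD_FLOPS = 1e26  # threshold stated in the law


def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_parameters * n_training_tokens


def exceeds_threshold(n_parameters: float, n_training_tokens: float) -> bool:
    """True if the estimate meets or exceeds the statutory threshold."""
    return estimated_training_flops(n_parameters, n_training_tokens) >= HIGH_PERFORMANCE_THRESHOLD_FLOPS


# Hypothetical example: a 70B-parameter model trained on 15T tokens
flops = estimated_training_flops(70e9, 15e12)  # 6.3e24 FLOPs, below 1e26
print(f"{flops:.2e}", exceeds_threshold(70e9, 15e12))
```

A model near the boundary should of course be assessed with actual accelerator-hours rather than a parameter-count heuristic; the point is that triage is cheap enough to run across an entire model inventory.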
The penalties are administrative fines, with reputational damage in tightly regulated sectors compounding the financial exposure. Enforcement has not yet begun in earnest, but the relevant ministry has publicly committed to transparent oversight, and implementation decrees are expected by mid-2026.
Why European Financial Services Cannot Ignore This
European lenders and fintech platforms are already navigating the EU AI Act's requirements for high-risk AI in credit scoring, fraud detection, and customer-facing automated decision-making. The addition of a second overlapping extraterritorial regime raises a structural question: are European firms building compliance architectures that can flex across multiple jurisdictions, or are they hard-coding assumptions that only hold inside the EU?
Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies in Brussels and one of Europe's most cited AI governance scholars, has argued consistently that the EU AI Act's risk-based tiering was designed with exactly this kind of global interoperability problem in mind. The Act's conformity assessment processes and technical standards were intended to serve as a de facto global benchmark, a Brussels Effect for AI. But that logic only holds if other jurisdictions converge on compatible frameworks. The evidence so far is mixed.
Contrast the EU's approach with the structure of this newer law. The EU AI Act prohibits certain applications outright, then classifies the remainder into high-risk, limited-risk, and minimal-risk tiers. The newer framework does not tier all AI; it identifies three specific categories and applies prescriptive rules to each. The result is that a credit-scoring model deployed in both the EU and a second jurisdiction could face genuinely different pre-deployment obligations, different human-oversight standards, and different labelling requirements in each market. Compliance-by-design becomes compliance-by-geography.
Luca Schnettler, Policy Director at the Munich-based AI governance think-tank appliedAI Institute for Europe, has noted that the proliferation of national AI laws creates what practitioners call a compliance matrix problem: firms must map each system's functionality against every jurisdiction's category definitions, and those definitions do not align. For a European bank running a single credit-risk foundation model across multiple markets, that matrix can become prohibitively complex within two to three regulatory cycles.
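The compliance matrix Schnettler describes can be pictured as a mapping from each system's attributes to the categories it triggers in each jurisdiction. Everything in the sketch below is a hypothetical illustration: the category names, trigger predicates, and jurisdiction labels are placeholders standing in for real legal definitions, not determinations of how any regulator would classify a system.

```python
# Minimal sketch of a "compliance matrix": map each deployed system's
# attributes against each jurisdiction's category definitions. The
# rules below are illustrative placeholders, not legal analysis.
from dataclasses import dataclass


@dataclass
class AISystem:
    name: str
    is_generative: bool = False
    touches_credit_decisions: bool = False
    training_flops: float = 0.0


# One trigger predicate per (jurisdiction, category). The definitions
# deliberately do not align across jurisdictions -- that misalignment
# is the matrix problem.
JURISDICTION_RULES = {
    "EU": {
        "high-risk": lambda s: s.touches_credit_decisions,
        "transparency": lambda s: s.is_generative,
    },
    "newer-law": {
        "high-impact": lambda s: s.touches_credit_decisions,
        "generative": lambda s: s.is_generative,
        "high-performance": lambda s: s.training_flops >= 1e26,
    },
}


def compliance_matrix(systems):
    """Return {system name: {jurisdiction: [triggered categories]}}."""
    return {
        s.name: {
            jurisdiction: [cat for cat, rule in rules.items() if rule(s)]
            for jurisdiction, rules in JURISDICTION_RULES.items()
        }
        for s in systems
    }


model = AISystem("credit-risk-fm", is_generative=True,
                 touches_credit_decisions=True, training_flops=5e24)
print(compliance_matrix([model]))
```

Even in this toy form, one system triggers different category sets in each jurisdiction; with dozens of systems and a growing list of national laws, the matrix grows multiplicatively, which is the complexity Schnettler warns about.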
A Snapshot of the Global Regulatory Landscape
To put this in context, the table below captures the four most significant comprehensive AI frameworks currently active or entering their enforcement phase:

| Framework | Status | Structure | Extraterritoriality |
|---|---|---|---|
| EU AI Act | Phased from Q3 2026 | Risk-based four-tier structure; high-risk category as the primary compliance trigger | Extraterritorial for third-country providers targeting EU users |
| The law discussed here | Live since 22 January 2026 | Three categories: high-impact, generative, and high-performance AI | Explicitly extraterritorial via three concurrent revenue and user thresholds |
| Morocco AI Law | Live from 1 March 2026 | Covers high-risk and high-impact AI; focused on public interest and critical sectors | Partial; requires local representative appointment |
| UAE Agentic AI Framework | Live from 22 January 2026 | Targets autonomous decision-making systems specifically | Primarily domestic in scope |
The cumulative signal is unmistakable. AI regulation is no longer a single-jurisdiction compliance project. It is a multi-front, overlapping discipline that is already competing for budget alongside GDPR, DORA, and Basel IV obligations in European financial services compliance functions.
The Open Implementation Questions That Matter Most
Three definitional ambiguities in the newer law are directly relevant to European financial services firms assessing their exposure.
First, the definition of high-impact AI names six sectors but does not codify granular thresholds. Does a chatbot that advises retail banking customers on mortgage eligibility constitute high-impact AI in the credit decision category, or is it informational software? Until enforcement decrees clarify this, firms must make a conservative assumption and treat any AI touching a credit outcome as high-impact.
Second, the human oversight requirement exists but does not specify its form. The law does not mandate human-in-the-loop veto authority, human review before deployment, or real-time continuous monitoring. It requires meaningful oversight without defining it. European firms accustomed to the EU AI Act's more prescriptive human-oversight standards will find this ambiguity uncomfortable.
Third, the deepfake labelling requirement remains loosely defined. It is unclear whether it applies only to synthetic media mimicking real individuals or to any AI-generated image, video, or audio output. For European fintechs using AI-generated content in customer communications, this matters.
The enforcement decree finalisation window, expected to close around mid-2026, is a genuine opportunity. Companies that file substantive comments during the rule-making process can influence how these ambiguities resolve. European firms with legal operations in the relevant jurisdiction should treat this as a strategic compliance moment, not a passive waiting period.
What Compliance Teams Should Be Doing Now
The practical steps for European financial services firms are not exotic. They follow the logic of any mature regulatory response:
- Audit every AI system currently deployed or in development that touches credit decisions, healthcare diagnostics, or insurance underwriting for extraterritorial exposure under any active non-EU AI law
- Map each system's training compute against the high-performance threshold to determine whether risk mitigation documentation obligations apply
- Review generative AI deployments for labelling and disclosure compliance across all active jurisdictions simultaneously
- Establish a monitoring brief on enforcement decree publications to capture definitional clarifications as they emerge
- Engage with EU institutions and standards bodies, particularly the European AI Office established under the EU AI Act, to push for mutual recognition or harmonisation dialogues with other active regulatory regimes
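The first two checklist items above can be folded into a lightweight inventory screen. This is a sketch under stated assumptions: the field names, the sensitive-domain list, and the market codes are illustrative inventions for this example, not regulatory definitions, and a real audit would need counsel to confirm which markets and domains actually trigger each law.

```python
# Hedged sketch of the audit steps: flag inventory records with
# potential extraterritorial or high-performance exposure.
# Domain labels and market codes are illustrative placeholders.
from dataclasses import dataclass

SENSITIVE_DOMAINS = {"credit", "healthcare", "insurance"}  # assumption


@dataclass
class InventoryRecord:
    system: str
    domain: str            # e.g. "credit", "fraud", "marketing"
    deployed_markets: set  # country codes where the system runs
    training_flops: float = 0.0


def exposure_flags(rec: InventoryRecord, non_eu_markets: set) -> list:
    """Return audit flags for one inventory record."""
    flags = []
    # Step 1: sensitive-sector system deployed in a non-EU market
    # with an active AI law -> needs an extraterritoriality review.
    if rec.domain in SENSITIVE_DOMAINS and rec.deployed_markets & non_eu_markets:
        flags.append("audit-extraterritorial-exposure")
    # Step 2: training compute at or above the stated threshold ->
    # risk mitigation documentation obligations may apply.
    if rec.training_flops >= 1e26:
        flags.append("high-performance-documentation")
    return flags


rec = InventoryRecord("mortgage-advisor", "credit", {"DE", "KR"}, 2e24)
print(exposure_flags(rec, non_eu_markets={"KR", "MA", "AE"}))
```

The value of even a crude screen like this is that it turns the checklist into a repeatable query over the model inventory, so the monitoring brief in the fourth step has a concrete artefact to re-run as enforcement decrees clarify the definitions.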
The European AI Office, which began operations in early 2024 as the central EU body for AI Act implementation, is the natural interlocutor for this kind of regulatory diplomacy. European financial services trade bodies, including Insurance Europe and the European Banking Federation, are already engaging it on high-risk AI definitions. Extending that engagement to cover jurisdictional divergence is the logical next step.
Delayed clarity from any regulator is not a compliance excuse. The law applies as written. European firms that treat enforcement-decree ambiguity as a reason to defer compliance audits will find themselves in a costly catch-up position when the decrees land.