Belgium's AI Act Compliance Gap: What the EU's Own Regulatory Architecture Gets Wrong
As Brussels finalises its AI Act implementation framework, a comparative look at Saudi Arabia's universal design-principle approach reveals a fundamental tension at the heart of EU AI governance. European financial services firms and AI vendors now face a dual compliance burden that neither framework was designed to handle alone.
The EU AI Act is the world's most discussed AI regulation, but it is not the world's most demanding one. As the European Commission pushes ahead with implementing measures and member states stand up their national authorities, a rival regulatory architecture has quietly emerged in Riyadh that exposes a structural weakness Brussels has not yet resolved: the gap between risk classification and design-level obligation.
That gap matters directly to European financial services firms, AI vendors, and platform operators, many of whom are building products for global markets and discovering that the EU's tiered risk model provides less engineering clarity than it promised. Understanding the Saudi approach is not an academic exercise; it is a competitive intelligence requirement for any company with international AI ambitions.
The Architecture Problem at the Heart of EU AI Governance
The EU AI Act divides AI systems into four tiers: unacceptable risk, high risk, limited risk, and minimal risk. Obligations attach to the tier. That model is logical in theory, but in practice it has produced protracted debates about classification, sector-specific carve-outs, and a compliance culture focused on avoiding the high-risk designation rather than building better systems.
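As a rough illustration of how obligations attach to tiers rather than to design features, the model can be sketched as a simple lookup table. The obligations listed here are a simplified paraphrase for illustration only, not the Act's legal text.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # heaviest compliance burden
    LIMITED = "limited"            # transparency duties only
    MINIMAL = "minimal"            # no mandatory obligations

# Illustrative paraphrase of the tiered obligations, not legal text.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited from the EU market"],
    RiskTier.HIGH: ["conformity assessment", "technical documentation", "human oversight"],
    RiskTier.LIMITED: ["transparency disclosures"],
    RiskTier.MINIMAL: [],
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Look up the obligations attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(RiskTier.LIMITED))  # ['transparency disclosures']
```

The structure makes the compliance incentive visible: nothing in the lookup rewards better engineering, only a lower-tier classification.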
Contrast that with the approach now being finalised in Saudi Arabia, where the Saudi Data and AI Authority has published a draft Responsible AI Policy requiring universal design principles applied to every AI developer regardless of use case or scale. Watermarking on all AI outputs. Bias mitigation across all training pipelines. Interpretability built into all models. Privacy, transparency, and safety by design, not by retrofit.
The obligation is not conditional on how the system is classified. It applies unconditionally, on the basis that you are building AI at all.
For European AI governance specialists, this raises an uncomfortable question. Dragoș Tudorache, the Romanian MEP who steered the AI Act through the European Parliament and has been its most prominent institutional champion, has acknowledged that the Act's risk-tier model was a political compromise as much as a technical one. The classification debates consumed years of legislative bandwidth that might otherwise have gone into prescriptive design standards.
What European Financial Services Firms Are Actually Facing
The financial services sector sits at the sharpest edge of this problem. Under the EU AI Act, credit-scoring systems, fraud-detection tools, and insurance underwriting models are likely to land in the high-risk category, triggering conformity assessments, technical documentation requirements, and human oversight obligations. That is appropriate. But firms building those same systems for deployment in multiple jurisdictions now face a layered problem.
A product that qualifies as limited-risk under the EU framework (a customer-facing chatbot, for example) may nonetheless require substantial re-engineering to meet baseline design requirements in other markets that have adopted universal design-principle models. The EU classification does not travel.
Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies in Brussels and one of Europe's most cited AI policy analysts, has argued consistently that the Act's architecture privileges legal certainty over engineering clarity. In a sector like financial services, where model behaviour must be explainable to regulators, auditors, and increasingly to customers under the European Banking Authority's own guidance, interpretability-by-design is not optional regardless of what risk tier a product occupies.
The European Banking Authority's guidelines on internal governance already push in this direction. Firms subject to those guidelines and the AI Act simultaneously are discovering that the two frameworks do not fully align on what interpretability means in practice.
The Universal Design-Principle Model: Sharper Than It Looks
The Saudi framework's seven foundational ethics principles cover integrity and fairness, privacy and security, humanity, accountability, transparency, reliability, and social and environmental considerations. What makes it operationally significant is the technical specificity attached to those principles.
Embedded watermarks in all AI outputs. Content-tracking mechanisms for provenance. Bias mitigation via data-source diversification. Interpretable model features. Privacy, transparency, and safety built into design. None of these obligations are conditioned on use-case classification. They apply to every system, every developer, every deployer operating within scope.
For European vendors eyeing market access beyond the EU's single market, this is material. A product shipping in Europe under a limited-risk designation may require significant additional engineering before it meets the baseline required elsewhere. The compliance floor is rising in multiple jurisdictions simultaneously, and the EU's risk-tier model does not automatically prepare firms for that reality.
Lucilla Sioli, Director for Artificial Intelligence and Digital Industry at the European Commission's DG CONNECT, has been the Commission's lead voice on AI Act implementation. Her office has signalled that the Commission intends to publish standardisation requests to European standards bodies, which will eventually translate the Act's high-level obligations into technical specifications. But that process will take years, and European firms building for global markets cannot wait.
Enforcement Infrastructure: The Detail That Changes Everything
Regulatory frameworks are only as serious as their enforcement apparatus. Here, the comparison between the EU's still-forming implementation machinery and existing models elsewhere is instructive.
The Saudi authority administering the Responsible AI Policy also administers the kingdom's Personal Data Protection Law, which came into force in September 2024 and has produced 48 violation decisions across 2024 and 2025. The enforcement infrastructure is not being built; it is already running. The Responsible AI Policy will be enforced through the same operational apparatus once consultation closes and the policy is finalised, expected in the second half of 2026.
By contrast, the EU AI Act's enforcement model distributes authority across member state market surveillance authorities, the AI Office at the Commission level for general-purpose AI models, and a nascent European AI Board. Belgium, which hosts the Commission and several key EU institutions, has yet to fully designate its national authority. The coordination layer between member state bodies and the AI Office remains a work in progress.
That is not a criticism of the ambition; it is an observation about sequencing. The EU has chosen to legislate first and build enforcement capacity second. Jurisdictions building enforcement infrastructure first and layering policy on top of it have a different sequencing advantage.
What European AI Vendors Should Do Now
For AI companies based in EU member states or the UK, the practical implications are concrete. If you are building for global markets, you cannot treat the EU AI Act as your only compliance reference. Universal design-principle frameworks are proliferating, and they tend to impose a higher technical floor than the EU's risk-tier model for products that do not land in the high-risk category.
Audit your watermarking implementation. The EU has narrowed its mandatory watermarking requirements to AI-generated content under the limited-risk tier, but other jurisdictions apply the obligation universally. If your current implementation is narrowly calibrated to the EU requirement, it may be insufficient elsewhere.
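A minimal sketch of what a universal provenance obligation implies in practice: every output carries a signed record identifying it as AI-generated, verifiable downstream. This is a metadata-level illustration only; the key name and model identifier are hypothetical, and jurisdictions mandating universal watermarking may also expect in-content (e.g. statistical) watermarks rather than attached metadata.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"replace-with-a-managed-secret"  # hypothetical key, illustration only

def attach_provenance(text: str, model_id: str) -> dict:
    """Wrap an AI output with a signed provenance record."""
    record = {"content": text, "generator": model_id, "ai_generated": True}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(record: dict) -> bool:
    """Check that a provenance record has not been altered since signing."""
    claimed = record.get("signature", "")
    body = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

tagged = attach_provenance("Quarterly risk summary...", "example-model-v1")
print(verify_provenance(tagged))  # True
```

The audit question is whether your pipeline does this for every output, or only for the subset the EU's limited-risk transparency rules happen to cover.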
Map your bias-mitigation approach against data-source diversification requirements, not just statistical fairness metrics. The two are related but not identical, and some frameworks are beginning to specify the former explicitly.
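The distinction can be made concrete with two small checks: one measures how concentrated your training data is in a single source (diversification), the other measures outcome gaps between groups (statistical fairness). A sketch under assumed record shapes; a corpus can pass either check while failing the other.

```python
from collections import Counter

def source_shares(records: list[dict]) -> dict:
    """Share of training records contributed by each data source."""
    counts = Counter(r["source"] for r in records)
    total = sum(counts.values())
    return {src: n / total for src, n in counts.items()}

def max_source_concentration(records: list[dict]) -> float:
    """Diversification check: how dominant is the single largest source?"""
    return max(source_shares(records).values())

def demographic_parity_gap(outcomes: list[tuple]) -> float:
    """Statistical fairness check: gap in positive-outcome rates between groups.

    `outcomes` is a list of (group, approved) pairs.
    """
    by_group: dict = {}
    for group, approved in outcomes:
        by_group.setdefault(group, []).append(approved)
    rates = [sum(v) / len(v) for v in by_group.values()]
    return max(rates) - min(rates)

data = [{"source": "bureau_a"}] * 70 + [{"source": "bureau_b"}] * 30
print(max_source_concentration(data))  # 0.7
```

A framework that specifies data-source diversification is regulating the first number; a statistical fairness metric only constrains the second.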
Treat interpretability as a baseline product feature rather than a high-risk-tier add-on. Financial services regulators across the EU, the UK's Financial Conduct Authority, and the European Banking Authority are all moving in this direction regardless of what the AI Act's risk tiers say. Building interpretability in from the start is both a compliance hedge and a competitive advantage.
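What interpretability-by-design looks like at its simplest: every automated decision ships with ranked reason codes, not just a score. The weights and features below are entirely hypothetical; this is a linear-model sketch of the reason-code pattern regulators expect, not a production scoring method.

```python
# Hypothetical linear credit-scoring sketch: illustrative weights only.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.6, "payment_history": 0.5}

def score(features: dict) -> float:
    """Linear score: sum of each feature's weighted contribution."""
    return sum(WEIGHTS[name] * value for name, value in features.items())

def explain(features: dict, top_n: int = 2) -> list[str]:
    """Rank features by absolute contribution: a minimal reason-code explanation."""
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{name}: {contrib:+.2f}" for name, contrib in ranked[:top_n]]

applicant = {"income": 0.8, "debt_ratio": 0.9, "payment_history": 0.3}
print(score(applicant))    # approximately -0.07
print(explain(applicant))  # ['debt_ratio: -0.54', 'income: +0.32']
```

For models where per-feature contributions are not directly readable off the weights, the same contract holds: the product surfaces the drivers of each decision, whatever attribution method sits underneath.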
The Structural Question Brussels Has Not Answered
The EU AI Act's risk-tier architecture made political sense as a way to build legislative consensus across 27 member states with divergent views on AI regulation. It may also prove correct as a long-term governance model. Risk proportionality is a legitimate regulatory principle.
But the Act's implementation timeline is long, its classification debates are unresolved in several high-stakes sectors, and its enforcement architecture is still forming. Meanwhile, the global regulatory environment is moving toward higher baseline design requirements for all AI systems, not just those that attract a high-risk designation.
European financial services firms sitting in this environment face a dual burden: compliance with a framework that is still being defined, and market access requirements from jurisdictions that have already defined theirs more prescriptively. The companies that will navigate this best are not those waiting for the Commission's standardisation requests. They are those treating universal design principles as a product requirement today.
AI Terms in This Article (3 terms)
responsible AI: Developing and deploying AI with consideration for ethics, fairness, and safety.
AI governance: The policies, standards, and oversight structures for managing AI systems.
bias: When an AI system produces unfair or skewed results, often reflecting prejudices in training data.