Morocco Enforces the First Binding AI Law South of the Mediterranean, and Europe Is Watching

Morocco's Decree 13/2025 came into force on 1 March 2026, making the kingdom the first country south of the Mediterranean to enforce a comprehensive, standalone AI law. Built on a risk-based framework directly inspired by the EU AI Act, it sets a precedent that European regulators, technology firms, and trading partners cannot afford to ignore.

Morocco has done what many larger economies have only promised: enacted a binding, enforceable AI law with real institutional teeth. Decree 13/2025 came into force on 1 March 2026, making Morocco the first country south of the Mediterranean to impose comprehensive, standalone AI regulation. For European governments, technology companies, and regulators who have spent years debating the Brussels Effect, the Casablanca Effect may now be an equally instructive case study.

The decree introduces a risk-based regulatory framework that draws clear inspiration from the EU AI Act, but is deliberately tailored to Morocco's development priorities, industrial structure, and social context. It is not a photocopy of European legislation. It is a purposeful adaptation, and that distinction matters for every multinational company with operations or customers in the country, including a substantial number headquartered in Paris, Madrid, and Amsterdam.

What the Law Actually Requires

Decree 13/2025 is built around several interlocking obligations that apply to both domestic developers and foreign companies operating in Morocco. The central requirement is mandatory registration of high-risk AI systems with the Ministry of Information and Communications, coupled with impact assessments that must be completed before any such system can be deployed.

Transparency is a core pillar. High-risk AI systems must meet disclosure requirements, ensuring that users and affected parties are informed when they are interacting with, or subject to, automated decision-making. The law also introduces strict restrictions on AI-generated deepfakes, reflecting documented concern about synthetic media's role in disinformation and fraud. A national AI development fund is established under the decree to channel resources into domestic AI research and capability building. This dual posture of regulating risk whilst investing in capacity reflects a sophistication that goes well beyond simple prohibition.

The OECD AI Policy Observatory, which tracks binding AI legislation globally, has noted that Morocco's approach aligns structurally with tier-based risk classification models first articulated in the EU AI Act. The similarities are not accidental. Moroccan policymakers consulted extensively with European counterparts during the drafting process, and the country's Francophone legal tradition made the EU framework a natural reference point.

Professor Luc Steels, a leading AI researcher at the Free University of Brussels and a long-standing contributor to European AI ethics debates, has argued that the EU's risk-based model travels well to other jurisdictions precisely because it is framework legislation rather than prescriptive technical rules. Morocco's decree appears to validate that thesis in practice.

Data Localisation and Sector-Specific Rules

One of the most commercially significant elements of the decree is the introduction of data localisation requirements for sensitive sectors. Companies handling AI-processed data in healthcare, finance, and national security will be required to store and process that data within Moroccan borders. For European technology firms running regional infrastructure across North Africa, this creates real compliance complexity that legal teams will need to address urgently.

  • Healthcare AI systems must comply with localisation rules for patient data
  • Financial AI applications face both registration and data residency obligations
  • National security-adjacent AI use cases are subject to the most stringent oversight
  • Civil society organisations retain the right to seek redress through algorithmic transparency mechanisms

The data localisation provisions echo debates that are very much alive within the EU itself. The European Data Act and ongoing negotiations around data spaces under the European Health Data Space initiative reflect the same underlying tension between cross-border data flows and sovereign control over sensitive information. Morocco's choices here will resonate with European policymakers who are navigating identical trade-offs at home.

[Image: AI compliance dashboards and risk-classification interfaces inside a European regulatory technology office.]

Industry Concerns and Civil Society Response

Reaction to Decree 13/2025 has divided along predictable lines. Industry groups, particularly those representing startups and smaller technology companies, have raised legitimate concerns about compliance costs. Registration requirements, impact assessments, and data localisation infrastructure all carry costs that established multinationals can absorb far more easily than early-stage Moroccan ventures. This is precisely the same complaint levelled at the EU AI Act by European startup founders, and it deserves a serious answer rather than dismissal.

Civil society organisations have taken a different view. Provisions on algorithmic transparency and citizen redress have been broadly welcomed by digital rights advocates who argue that without such safeguards, AI systems deployed in public services risk entrenching existing inequalities with no mechanism for challenge or correction. The European Digital Rights network, known as EDRi, which coordinates civil society engagement on AI policy across the EU and associated countries, has consistently made the same argument about the importance of redress mechanisms in the EU AI Act's own provisions.

The National AI Ethics Committee, a new body established under the decree, is tasked with reviewing emerging AI applications and providing policy recommendations as the technology develops. Whether it has genuine independence and adequate resourcing will be critical to its credibility. European observers will recognise this challenge: the EU's own AI Office, established under the AI Act, faces identical questions about its capacity to provide meaningful oversight at scale.

The European Dimension: Trading Partners and Regulatory Spillover

Morocco's move does not exist in isolation from European interests. The country is one of the EU's closest trading partners, has an Association Agreement with the bloc, and receives substantial investment from French, Spanish, and Dutch technology companies. Any company that has deployed AI systems in Morocco, whether in customer service, credit scoring, medical diagnostics, or logistics optimisation, now faces a compliance clock that cannot be deferred indefinitely.

This is the Brussels Effect in reverse, or at least in parallel. The EU AI Act has been described by Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies in Brussels, as a potential global standard-setter precisely because companies that comply with it gain a template applicable in other jurisdictions. Morocco's decree is close enough to the EU framework that firms already investing in EU AI Act compliance may find the incremental cost of Moroccan compliance manageable. Those who have deferred EU compliance will now face a doubled burden.

The broader regional picture also matters to European technology exporters. Egypt, Jordan, and Tunisia are all at various stages of developing AI governance frameworks, and Morocco's decree now serves as a concrete reference point for that work. A regulatory cluster aligned broadly with EU norms, covering a combined market of well over 150 million people, would represent a significant extension of European regulatory influence southward across the Mediterranean.

Comparing Regulatory Approaches in the Southern Neighbourhood

| Country | AI Regulatory Status | Key Approach |
|---------|---------------------|--------------|
| Morocco | Binding standalone law in force (March 2026) | Risk-based, registration, ethics committee |
| Egypt | Early-stage policy consultation | National AI strategy exists; binding rules pending |
| Tunisia | Voluntary frameworks only | Principles-based, no hard enforcement |
| Jordan | Draft AI legislation in parliament | Rights-based framing, still in debate |
| EU | Binding regulation in phased implementation | Risk-based, tiered obligations, AI Office oversight |

The Deepfake Problem and Why Morocco Moved Quickly

One of the most politically sensitive provisions in the decree concerns AI-generated deepfakes. Morocco has experienced a documented surge in synthetic media used for fraud, political manipulation, and reputational damage. The law's restrictions on deepfakes are not symbolic gestures. They reflect harms that have already reached Moroccan courts and regulatory bodies.

This connects to a problem that is equally acute in Europe. The EU AI Act explicitly classifies certain deepfake applications as high-risk or prohibited, and the EU's Digital Services Act creates additional obligations for platforms hosting synthetic media at scale. European policymakers have arrived at strikingly similar conclusions to their Moroccan counterparts through different legislative routes, which suggests the underlying threat model is genuinely converging across regions.

  • Deepfake fraud cases involving financial scams have increased significantly across North Africa and the Mediterranean basin in recent years
  • Political deepfakes targeting candidates and officials have been documented in multiple national elections across Europe and the southern neighbourhood
  • Platform self-regulation has proven insufficient, prompting legislative responses across multiple jurisdictions simultaneously

What Comes Next: Enforcement Is the Real Test

The enforcement phase is where the law will prove itself or fail. A regulation that exists on paper but lacks the institutional capacity to enforce it provides only partial protection, as the EU has itself discovered in the patchy early application of the General Data Protection Regulation. Morocco's Ministry of Information and Communications will need to build genuine technical expertise to evaluate AI impact assessments, maintain the high-risk system registry, and investigate complaints under the redress provisions.

The National AI Ethics Committee faces an equally demanding institutional challenge. Its credibility will depend on the independence of its members, the transparency of its deliberations, and whether its recommendations are incorporated into policy updates. These are institutional design questions that are routinely more consequential than the text of the law itself. The EU AI Office in Brussels is grappling with exactly these questions right now, and neither body has yet produced a definitive answer.

For European companies operating in Morocco or planning to enter the market, the practical priority is clear: identify which AI systems fall into the high-risk classification under the decree, map those obligations against existing EU AI Act compliance programmes, and engage proactively with the Ministry of Information and Communications before enforcement guidance hardens into precedent. Those who engage early will shape implementation. Those who wait will be shaped by it.

