Vietnam's AI Law Is Live: What the EU's Own Regulatory Blueprint Looks Like When Someone Else Builds With It

Vietnam became the first country in Southeast Asia to enforce a comprehensive AI law on 2 March 2026, using the EU AI Act's risk-based architecture as its foundation. For European regulators and technology firms, the moment is instructive: the framework they spent years debating is now being adapted, exported, and operated by others.

Vietnam has become the first country in Southeast Asia to enforce a comprehensive AI law, and the legislation that came into force on 2 March 2026 is built, quite deliberately, on the skeleton of the EU AI Act. That fact alone should prompt reflection in Brussels, Berlin, and London: the regulatory model Europe spent the better part of a decade constructing is now being adapted by a rapidly developing economy that wants the rigour without the multilateral obligations. European policymakers and technology companies would do well to study what Vietnam has done, and what it signals for the next phase of global AI governance.

Passed unanimously by Vietnam's National Assembly in December 2025, the law establishes a tiered, risk-based classification system for artificial intelligence applications. The architecture will be immediately familiar to anyone who has worked through the EU AI Act: systems are sorted by their potential to cause harm, with the highest-risk category triggering outright bans and lower-risk categories facing proportionate compliance obligations. The parallel is not accidental. Vietnamese policymakers explicitly modelled their framework on the EU approach, while adjusting it to serve national priorities that diverge sharply from Europe's emphasis on cross-border harmonisation.

A Risk-Based Framework Adapted for Local Conditions

At the top of Vietnam's risk hierarchy sit technologies that pose direct threats to human rights and public safety. Non-consensual facial recognition systems and malicious deepfakes designed to deceive or manipulate are banned outright. At the other end, minimally invasive applications such as spam filters, basic recommendation algorithms, and routine automation tools face negligible regulatory burden, giving developers room to iterate without excessive compliance overhead.

Between those poles, medium and higher-risk applications face graduated requirements: transparency documentation, human oversight mechanisms, impact assessments, and regular audits. The structure should feel recognisable to European compliance teams, even if the enforcement machinery is entirely Vietnamese in character.
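The graduated structure described above can be sketched as a simple mapping from risk tier to obligations. This is purely illustrative: the statute's actual tier names, and the exact duty list attached to each tier, are not specified in this article, so the names and groupings below are assumptions.

```python
from enum import Enum

class RiskTier(Enum):
    """Illustrative risk tiers; the law's own terminology may differ."""
    UNACCEPTABLE = "unacceptable"  # banned outright, e.g. non-consensual facial recognition
    HIGH = "high"
    MEDIUM = "medium"
    MINIMAL = "minimal"            # spam filters, basic recommenders, routine automation

# Hypothetical mapping of tiers to the obligations named in the article;
# real compliance requirements would come from implementing guidance.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: ["prohibited"],
    RiskTier.HIGH: ["transparency documentation", "human oversight",
                    "impact assessment", "regular audits"],
    RiskTier.MEDIUM: ["transparency documentation", "impact assessment"],
    RiskTier.MINIMAL: [],  # negligible regulatory burden
}

def obligations_for(tier: RiskTier) -> list[str]:
    """Return the illustrative compliance duties for a given tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is the shape, not the contents: a compliance team maintains one authoritative classification per system, and everything downstream (documentation, audit scheduling) keys off that tier.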

Where the two frameworks diverge most sharply is in their political orientation. The EU AI Act emerged from a consensus-driven process designed to harmonise standards across twenty-seven member states and create mutual recognition mechanisms for the single market. Vietnam's law, by contrast, is explicitly framed around digital sovereignty and the development of indigenous AI capabilities. A national AI development fund will channel investment into data centres and research facilities. A National AI Database, operated by the Ministry of Science and Technology, will track systems, monitor compliance, and serve as a centralised governance hub. These are not the priorities of a country looking to integrate with a supranational regulatory regime; they are the priorities of a country that wants to build competitive AI infrastructure of its own.

What European Observers Are Making of It

For those watching from Europe, the Vietnamese law raises a pointed question: if the EU AI Act is genuinely the global gold standard, why is the most notable international adoption of its architecture coming paired with an explicit rejection of its multilateral logic?

Dragoș Tudorache, the Romanian MEP who co-authored the EU AI Act and steered it through the European Parliament, has consistently argued that the regulation's risk-based structure was designed to be portable, and that its adoption elsewhere validates the approach. The Vietnamese case tests that argument. Portability is real: the tiered model has been lifted and transplanted with apparent success. But the values that animate the EU version (open markets, mutual recognition, fundamental rights as a supranational commitment) have not travelled with the architecture.

Yoshua Bengio, the Montreal-based AI safety researcher whose input has been sought repeatedly by European policymakers and who submitted evidence to EU consultations ahead of the AI Act's finalisation, has argued that risk-based frameworks are only as effective as the enforcement infrastructure behind them. Vietnam's law establishes the framework; whether the Ministry of Science and Technology can operationalise robust, independent auditing of high-risk systems remains an open question. That challenge is not unique to Vietnam. Several EU member states are themselves still building the technical capacity their own national authorities will need to enforce the AI Act credibly.

Transparency, Labelling, and the Sovereignty Dimension

One of Vietnam's most consequential requirements concerns AI-generated content. Companies deploying AI systems must clearly label outputs as artificially generated, whether synthetic media, deepfakes, or AI-authored text. The provision mirrors obligations embedded in the EU AI Act and the EU's separate AI Liability Directive discussions, reinforcing the sense that a global consensus on content transparency is forming, even if enforcement will be fragmented across jurisdictions.
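In practice, a labelling obligation like this usually means attaching a machine-readable disclosure to each output. The sketch below is a minimal illustration only: neither Vietnam's law nor the EU AI Act prescribes this JSON shape, and the field names here are assumptions standing in for whatever format implementing guidance eventually specifies.

```python
import json
from datetime import datetime, timezone

def label_ai_output(content: str, model_id: str) -> str:
    """Wrap generated content with an 'AI-generated' disclosure record.

    A hypothetical format for illustration; real deployments would follow
    the disclosure schema mandated by the relevant regulator.
    """
    disclosure = {
        "ai_generated": True,            # the core transparency claim
        "model_id": model_id,            # which system produced the output
        "labelled_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps({"disclosure": disclosure, "content": content})
```

A deployment would emit this record alongside (or embedded in) the synthetic media, text, or audio it accompanies, so that downstream platforms can detect and surface the label.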

The sovereignty framing that runs through Vietnam's legislation deserves more attention than it typically receives in European commentary. Vietnamese officials have been explicit that the law is partly designed to reduce economic dependence on foreign technology providers and ensure that AI development serves Vietnamese interests. This is a geopolitical posture, not merely a regulatory one. It reflects a broader pattern visible in the EU's own AI and data strategies: the recognition that who controls AI infrastructure shapes who captures economic value and who bears systemic risk.

The law also supersedes AI-related provisions from Vietnam's 2025 Law on Digital Technology Industry, consolidating scattered regulations into a coherent single framework. European policymakers attempting to navigate the overlapping obligations of the AI Act, the General Data Protection Regulation, the Digital Services Act, and the forthcoming AI Liability Directive will recognise both the ambition and the difficulty of that consolidation exercise.

Implications for European Technology Firms

For European companies with operations or commercial interests in Southeast Asia, Vietnam's law creates immediate compliance obligations. Firms must implement transparency controls for AI-generated content, conduct impact assessments for higher-risk systems, and ensure their Vietnamese deployments meet the new classification standards. Those that have already invested in EU AI Act compliance infrastructure will find the Vietnamese requirements broadly familiar, though the enforcement context differs substantially.

The more interesting strategic question is whether compliance in Vietnam can be used as a credible signal in other regional markets that are watching and preparing their own legislation. Thailand, the Philippines, Malaysia, and Indonesia are all at various stages of developing AI policy frameworks. A company that demonstrates rigorous, auditable compliance in Vietnam is plausibly better positioned when those frameworks crystallise. European firms that built compliance capabilities early for the EU AI Act could find themselves with a similar first-mover advantage in Southeast Asian markets, provided they invest in adapting those capabilities to local regulatory specifics rather than simply transplanting EU-format documentation.

Implementation Will Determine Everything

Vietnam's law is operative, but implementation is where regulatory ambitions are made or broken. Detailed guidance on risk classification must be developed and published. Audit mechanisms must be established and resourced. The National AI Database must function as a governance tool rather than a surveillance instrument. The national AI development fund must direct capital toward technically credible projects rather than politically favoured ones.

These are demanding requirements for any regulatory system, including European ones. The EU AI Act itself faces significant implementation pressure: member state authorities are still being designated, conformity assessment bodies are thin on the ground, and the technical standards that underpin several key obligations are still being finalised by CEN-CENELEC working groups. Vietnam faces a compressed version of the same challenge with fewer institutional resources.

Success would validate the risk-based model as genuinely exportable and set a template that other developing economies follow. Failure would give ammunition to those who argue that comprehensive AI regulation is a luxury that only large, well-resourced jurisdictions can operationalise without strangling innovation. Europe has a direct interest in the outcome: the more credible the global ecosystem of risk-based AI regulation becomes, the stronger the argument for the EU AI Act's own legitimacy as a governance standard.

