A Risk-Based Framework Adapted for Local Conditions
At the top of Vietnam's risk hierarchy sit technologies that pose direct threats to human rights and public safety. Non-consensual facial recognition systems and malicious deepfakes designed to deceive or manipulate are banned outright. At the other end, minimal-risk applications such as spam filters, basic recommendation algorithms, and routine automation tools face negligible regulatory burden, giving developers room to iterate without excessive compliance overhead.
Between those poles, medium and higher-risk applications face graduated requirements: transparency documentation, human oversight mechanisms, impact assessments, and regular audits. The structure should feel recognisable to European compliance teams, even if the enforcement machinery is entirely Vietnamese in character.
Where the two frameworks diverge most sharply is in their political orientation. The EU AI Act emerged from a consensus-driven process designed to harmonise standards across twenty-seven member states and create mutual recognition mechanisms for the single market. Vietnam's law, by contrast, is explicitly framed around digital sovereignty and the development of indigenous AI capabilities. A national AI development fund will channel investment into data centres and research facilities. A National AI Database, operated by the Ministry of Science and Technology, will track systems, monitor compliance, and serve as a centralised governance hub. These are not the priorities of a country looking to integrate with a supranational regulatory regime; they are the priorities of a country that wants to build competitive AI infrastructure of its own.
What European Observers Are Making of It
For those watching from Europe, the Vietnamese law raises a pointed question: if the EU AI Act is genuinely the global gold standard, why is the most notable international adoption of its architecture paired with an explicit rejection of its multilateral logic?
Dragoș Tudorache, the Romanian MEP who co-authored the EU AI Act and steered it through the European Parliament, has consistently argued that the regulation's risk-based structure was designed to be portable, and that its adoption elsewhere validates the approach. The Vietnamese case tests that argument. Portability is real; the tiered model has been lifted and transplanted with apparent success. But the values that animate the EU version (open markets, mutual recognition, fundamental rights as a supranational commitment) have not travelled with the architecture.
Yoshua Bengio, the Montreal-based AI safety researcher whose input has been sought repeatedly by European policymakers and who submitted evidence to EU consultations ahead of the AI Act's finalisation, has argued that risk-based frameworks are only as effective as the enforcement infrastructure behind them. Vietnam's law establishes the framework; whether the Ministry of Science and Technology can operationalise robust, independent auditing of high-risk systems remains an open question. That challenge is not unique to Vietnam. Several EU member states are themselves still building the technical capacity their own national authorities will need to enforce the AI Act credibly.
Transparency, Labelling, and the Sovereignty Dimension
One of Vietnam's most consequential requirements concerns AI-generated content. Companies deploying AI systems must clearly label outputs as artificially generated, whether synthetic media, deepfakes, or AI-authored text. The provision mirrors obligations embedded in the EU AI Act and the EU's separate AI Liability Directive discussions, reinforcing the sense that a global consensus on content transparency is forming, even if enforcement will be fragmented across jurisdictions.
The sovereignty framing that runs through Vietnam's legislation deserves more attention than it typically receives in European commentary. Vietnamese officials have been explicit that the law is partly designed to reduce economic dependence on foreign technology providers and ensure that AI development serves Vietnamese interests. This is a geopolitical posture, not merely a regulatory one. It reflects a broader pattern visible in the EU's own AI and data strategies: the recognition that who controls AI infrastructure shapes who captures economic value and who bears systemic risk.
The law also supersedes AI-related provisions from Vietnam's 2025 Law on Digital Technology Industry, consolidating scattered regulations into a coherent single framework. European policymakers attempting to navigate the overlapping obligations of the AI Act, the General Data Protection Regulation, the Digital Services Act, and the forthcoming AI Liability Directive will recognise both the ambition and the difficulty of that consolidation exercise.
Implications for European Technology Firms
For European companies with operations or commercial interests in Southeast Asia, Vietnam's law creates immediate compliance obligations. Firms must implement transparency controls for AI-generated content, conduct impact assessments for higher-risk systems, and ensure their Vietnamese deployments meet the new classification standards. Those that have already invested in EU AI Act compliance infrastructure will find the Vietnamese requirements broadly familiar, though the enforcement context differs substantially.
The more interesting strategic question is whether compliance in Vietnam can be used as a credible signal in other regional markets that are watching and preparing their own legislation. Thailand, the Philippines, Malaysia, and Indonesia are all at various stages of developing AI policy frameworks. A company that demonstrates rigorous, auditable compliance in Vietnam is plausibly better positioned when those frameworks crystallise. European firms that built compliance capabilities early for the EU AI Act could find themselves with a similar first-mover advantage in Southeast Asian markets, provided they invest in adapting those capabilities to local regulatory specifics rather than simply transplanting EU-format documentation.
Implementation Will Determine Everything
Vietnam's law is operative, but implementation is where regulatory ambitions are made or broken. Detailed guidance on risk classification must be developed and published. Audit mechanisms must be established and resourced. The National AI Database must function as a governance tool rather than a surveillance instrument. The national AI development fund must direct capital toward technically credible projects rather than politically favoured ones.
These are demanding requirements for any regulatory system, including European ones. The EU AI Act itself faces significant implementation pressure: member state authorities are still being designated, conformity assessment bodies are thin on the ground, and the technical standards that underpin several key obligations are still being finalised by CEN-CENELEC working groups. Vietnam faces a compressed version of the same challenge with fewer institutional resources.
Success would validate the risk-based model as genuinely exportable and set a template that other developing economies follow. Failure would give ammunition to those who argue that comprehensive AI regulation is a luxury that only large, well-resourced jurisdictions can operationalise without strangling innovation. Europe has a direct interest in the outcome: the more credible the global ecosystem of risk-based AI regulation becomes, the stronger the argument for the EU AI Act's own legitimacy as a governance standard.