The EU AI Act Has a New Mirror: What Europe's Healthcare Sector Must Learn from Vietnam's Binding AI Law

Vietnam brought Southeast Asia's first comprehensive, standalone AI law into force on 1 March 2026, adopting a risk-based framework that closely echoes the EU AI Act. European healthcare technology companies with global ambitions now have a second binding rulebook to navigate, and the compliance lessons run in both directions.

Vietnam's binding AI legislation, Law No. 134/2025/QH15, came into force on 1 March 2026, making it the first comprehensive standalone AI law in Southeast Asia. Passed by the National Assembly on 10 December 2025, the statute spans eight chapters and 35 articles and governs every stage of the AI lifecycle, from research and development through to deployment and end-user interaction. For European healthcare technology firms with operations or ambitions beyond the continent, this law is not a distant curiosity; it is a live compliance obligation that sits alongside Brussels' own rulebook.

The timing matters. Europe's own EU AI Act is in phased implementation, with high-risk system obligations biting from August 2026. Healthcare AI providers that have spent the past two years mapping their products to the EU Act's risk tiers now face a second, structurally similar regime on the other side of the world. The two frameworks were not designed together, but the family resemblance is unmistakable, and the divergences are where the real compliance work lies.


A Risk-Based Architecture That Will Feel Familiar

At the heart of Vietnam's legislation sits a three-tier risk classification system. Article 9 sets out the criteria for sorting AI systems into high-risk, medium-risk, and low-risk categories, a structure that any compliance team that has worked through the EU AI Act's Annex III will recognise immediately.

High-risk systems are those with the potential to cause significant harm to life, health, national security, or the lawful rights of individuals. Medical diagnostic AI, autonomous clinical decision-support tools, and any application touching critical health infrastructure fall squarely into this bracket. Providers must complete a pre-market conformity assessment and register on the national one-stop AI portal managed by Vietnam's Ministry of Science and Technology.

Medium-risk systems cover scenarios where users may be confused or misled by undisclosed AI interactions or AI-generated content. Patient-facing chatbots that do not clearly identify themselves as non-human are a textbook example. Providers must self-classify and notify authorities before deployment.

Low-risk systems, the large majority of everyday AI applications including spam filters, recommendation engines, and basic analytics tools, face minimal obligations beyond general transparency principles.
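As a rough illustration (and emphatically not legal advice), the three-tier triage described above can be sketched in code. The attribute names and the decision logic here are a simplification invented for this sketch; Article 9's actual criteria are more detailed than a few booleans can capture:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical attributes for illustration only; the statute's
    # real classification criteria are far richer than this.
    name: str
    can_harm_life_health_or_rights: bool = False
    touches_critical_infrastructure: bool = False
    undisclosed_user_interaction: bool = False

def classify(system: AISystem) -> str:
    """Sketch of the three-tier triage described in the article."""
    # High risk: potential for significant harm to life, health,
    # national security, or the lawful rights of individuals.
    if system.can_harm_life_health_or_rights or system.touches_critical_infrastructure:
        return "high"
    # Medium risk: users may be confused or misled by undisclosed
    # AI interactions (e.g. an unlabelled patient-facing chatbot).
    if system.undisclosed_user_interaction:
        return "medium"
    # Low risk: everything else (spam filters, recommenders, analytics).
    return "low"

print(classify(AISystem("diagnostic imaging model", can_harm_life_health_or_rights=True)))  # high
print(classify(AISystem("patient chatbot", undisclosed_user_interaction=True)))             # medium
print(classify(AISystem("spam filter")))                                                    # low
```

The ordering matters: a patient-facing diagnostic chatbot that both risks harm and hides its AI identity lands in the high-risk tier, because the harm test is applied first.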

Professor Maja Pantic, a machine learning researcher at Imperial College London whose work spans healthcare AI ethics, has argued consistently that risk-tiered frameworks are the only coherent approach to governing AI in clinical settings. "Any credible AI governance regime has to start with the question of consequence," she noted in a 2024 Royal Academy of Engineering panel. "The higher the stakes for the patient, the heavier the obligation on the developer." Vietnam's law embodies precisely that logic.

Who Is Covered, and What They Must Do

The law defines five roles in the AI supply chain, each carrying specific responsibilities: developers (those who design and train models), providers (those who place systems on the market), deployers (organisations using AI commercially), users (individuals interacting directly with the system), and affected persons (anyone whose rights are impacted by an AI decision).

The provision that will most directly affect European medtech and health-AI companies is the local presence requirement. Foreign providers of high-risk AI systems must either establish a commercial presence in Vietnam or appoint an authorised representative in-country. This is, in structural terms, identical to the EU AI Act's own requirement for non-EU providers to designate an EU representative. European companies that have already built that infrastructure for Brussels compliance should find the principle straightforward; the operational detail of standing up a Vietnamese entity is another matter.

[Image: a clinical workstation in a modern European hospital technology suite, displaying AI-assisted diagnostic imaging software on dual monitors.]

Penalties: Modest Now, Scalable Later

The headline fine figures are modest by European standards. Organisations that breach the law face administrative penalties of up to VND 2 billion (approximately 75,800 US dollars), while individuals can be fined up to VND 1 billion. Compare that with the EU AI Act, where fines for prohibited-practice violations reach 35 million euros or 7 per cent of global annual turnover, and the Vietnamese numbers look almost negligible.

But the sleeper clause is the revenue-based penalty provision. The law explicitly permits future implementing decrees to tie fines to a percentage of global turnover, mirroring the EU approach. For a large European medtech group with Vietnamese operations, that provision transforms the compliance calculus entirely. Andrea Renda, Senior Research Fellow at the Centre for European Policy Studies in Brussels and one of Europe's most cited AI governance analysts, has noted that revenue-based penalties are the single most effective deterrent mechanism available to regulators precisely because they scale with the offender's capacity to absorb harm. Vietnam has reserved that option, and multinational operators should plan as if it will be exercised.

Healthcare Gets Extra Time, But Not Forever

One provision of direct relevance to European healthcare AI companies is the sector-specific grace period. Existing AI systems deployed in healthcare, finance, and education have until 1 September 2027 to achieve full compliance, six months longer than the 1 March 2027 deadline applying to all other sectors. That window is generous by regulatory standards, but it is not infinite, and the conformity assessment process for high-risk systems will take time to work through in a jurisdiction where the relevant technical infrastructure is still being established.

For European companies that have been treating Vietnamese market entry as a future priority rather than a present task, the 18-month clock is already running. Healthcare AI systems, particularly those involving diagnostic imaging, clinical decision support, or patient data processing, are precisely the category most likely to attract high-risk classification under Article 9.
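To make the timelines concrete, a minimal sketch of the deadline arithmetic, assuming the entry-into-force and compliance dates as reported here:

```python
from datetime import date

# Dates as reported in this article.
ENTRY_INTO_FORCE = date(2026, 3, 1)
GENERAL_DEADLINE = date(2027, 3, 1)        # all other sectors
SECTOR_GRACE_DEADLINE = date(2027, 9, 1)   # healthcare, finance, education

def months_between(start: date, end: date) -> int:
    """Whole calendar months from start to end (day-of-month ignored)."""
    return (end.year - start.year) * 12 + (end.month - start.month)

print(months_between(ENTRY_INTO_FORCE, GENERAL_DEADLINE))       # 12
print(months_between(ENTRY_INTO_FORCE, SECTOR_GRACE_DEADLINE))  # 18
```

The 18-month figure for the healthcare grace period is where the "clock already running" framing comes from.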

Innovation Incentives: A Lesson Brussels Might Note

Vietnam's law is not purely restrictive. It establishes a National AI Development Fund and introduces significant tax incentives for qualifying AI projects. A voucher scheme targets startups, helping them access computing resources, curated datasets, and sandbox testing environments. The stated intent is to attract investment and grow a domestic AI ecosystem, not to repel foreign capital.

This dual-track approach, binding obligations for high-risk applications combined with active public investment to support innovation, is something European policymakers have struggled to execute convincingly. The EU AI Act has been criticised in some quarters, including by Mistral AI's leadership, for imposing compliance costs that fall disproportionately on smaller European companies relative to large American and Chinese incumbents. Vietnam's fund and voucher scheme represent a more explicit attempt to solve that distribution problem. Whether the mechanism works in practice remains to be seen, but the design intent is instructive.

What European Healthcare AI Firms Should Do Now

The practical implications for European companies are straightforward, even if the execution is not.

First, audit your AI systems against Vietnam's three-tier classification. If your product is already classified as high-risk under the EU AI Act's Annex III for healthcare applications, assume it will attract the same classification in Vietnam and plan accordingly.

Second, review your disclosure practices. Medium-risk classification triggers transparency requirements around AI identity disclosure. Patient-facing tools that do not clearly identify themselves as AI-powered will need remediation before the grace periods expire.

Third, resolve the local presence question early. Whether you establish a commercial entity or appoint a representative, this decision involves legal, tax, and operational considerations that take time to work through. Do not leave it until the final quarter before the deadline.

Finally, monitor the implementing decrees. The revenue-based penalty provision and much of the operational detail will be fleshed out in secondary legislation over the coming 18 months. Companies that track that process closely will have a compliance advantage over those that wait for the final text.
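The first step above, under the article's working assumption that an EU Annex III high-risk classification should carry over to Vietnam's high-risk tier, can be sketched as a simple portfolio triage. The product names, EU tier labels, and mapping are all hypothetical illustrations, not taken from either statute:

```python
# Hypothetical product inventory: (product name, EU AI Act classification).
portfolio = [
    ("radiology triage model", "high-risk"),
    ("patient-facing symptom chatbot", "limited-risk"),
    ("clinic scheduling assistant", "minimal-risk"),
]

def presumed_vietnam_tier(eu_tier: str) -> str:
    # Working assumption from the article: plan for EU high-risk
    # healthcare systems to be high-risk in Vietnam too; EU
    # transparency-tier systems map roughly to medium risk.
    mapping = {"high-risk": "high", "limited-risk": "medium"}
    return mapping.get(eu_tier, "low")

for name, eu_tier in portfolio:
    print(f"{name}: plan for Vietnam tier '{presumed_vietnam_tier(eu_tier)}'")
```

A real audit would of course work from Article 9's criteria directly rather than assuming equivalence, but as a planning default the mapping errs on the conservative side.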

Vietnam's AI law is not perfect. Enforcement mechanisms are untested, and the gap between legislative ambition and on-the-ground implementation will take years to close. But for any European company building, deploying, or investing in healthcare AI with a global footprint, this is now a reference point that belongs alongside the EU AI Act in your compliance framework.

