The EU Wrote an Encyclopaedia; Taiwan Wrote a Constitution
To understand why Taiwan's approach matters for a European audience, you first need to confront what the EU AI Act actually is in practice. Europe's legislation is built around a risk pyramid. Every AI system gets sorted into one of four boxes: banned outright, high-risk (extensive obligations), limited risk (transparency requirements), or minimal risk (no specific rules). On paper it is elegant. In practice, the compliance architecture is formidable.
Consider a medical AI startup developing a diagnostic imaging tool. Under the EU AI Act, the company must first determine its risk classification, then work through conformity assessments, technical documentation requirements, quality management systems, and post-market monitoring obligations, all before a single patient benefits. Margrethe Vestager, former European Commission Executive Vice-President for A Europe Fit for the Digital Age, repeatedly acknowledged this tension, stating that the Commission's goal was rules that were proportionate and avoided unnecessary burdens on innovators, particularly small and medium enterprises. Whether the final text achieved that remains hotly debated across Brussels and London alike.
Principles, Not Prescriptions
Where the EU writes detailed specifications, Taiwan sets broad principles: fairness, transparency, accountability, and meaningful human oversight. Crucially, it then delegates the specifics to sector-level regulators who actually understand the domain. Financial regulators write the fintech rules. Health authorities set the medical AI standards. This is not regulatory laziness; it is a conscious acknowledgement that generalist lawmakers cannot anticipate every edge case in a fast-moving technical field.
The most practically significant feature is the sandbox clause. High-risk AI faces serious oversight, but research and development retain breathing room. The threshold is real-world deployment: the moment a system touches actual people or live environments, accountability obligations activate. Experimentation in controlled settings remains protected. This single design choice addresses one of the loudest complaints from European AI researchers, namely that the EU's requirements risk chilling academic and startup innovation before it reaches market.
Anna Felicitas Herr, policy analyst at the European AI Office established under the AI Act, has noted publicly that implementation guidance will need to clarify how obligations apply at different stages of the development lifecycle. That is a tacit admission that the current text leaves genuine ambiguity precisely where Taiwan's sandbox approach is explicit.
What Europe Can Actually Learn
The structural differences between the two approaches are worth laying out plainly:
- EU AI Act: risk-based categories with mandatory pre-market conformity assessments and ongoing post-market surveillance for high-risk systems, creating high but predictable entry costs.
- Taiwan AI Basic Act: a principles-plus-sector-expertise model with a research sandbox; deployment, not development, triggers accountability.
- UK approach: post-Brexit, the UK has opted for a sector-by-sector, pro-innovation framework overseen by existing regulators such as the FCA and MHRA, closer in spirit to Taiwan than to the EU but without Taiwan's explicit statutory principles.
The UK's approach has attracted criticism from Yoshua Bengio, the Turing Award-winning AI safety researcher and Scientific Director of Mila in Montreal, who has argued at the UK AI Safety Summit and subsequently that voluntary frameworks without statutory backing leave critical gaps in oversight of frontier systems. Taiwan's model is not purely voluntary; it carries legislative authority whilst remaining flexible in implementation. That combination is precisely what critics of both the EU's rigidity and the UK's softness are asking for.
The Broader Stakes for Smaller Tech Economies
Taiwan's position is structurally analogous to several European mid-sized tech economies: significant innovation capacity, global supply chain integration, democratic values, and limited appetite for regulatory overreach that would hand an advantage to less scrupulous competitors. The Netherlands, Sweden, and Finland all face a version of this tension. ASML, headquartered in Eindhoven, sits at the centre of global semiconductor supply chains in much the same way Taiwan's chipmakers do; regulatory decisions taken in either place ripple worldwide.
The key principles emerging from Taiwan's framework that European policymakers should examine include:
- Sector-specific expertise produces better rules than one-size-fits-all legislation.
- Research freedom and deployment accountability can be separated by design, not just by interpretation.
- International regulatory compatibility does not require copying the largest jurisdiction's rulebook wholesale.
- Flexible frameworks that allow incremental tightening outperform static codes as technology evolves.
- Human oversight requirements should scale with risk level, not be uniform across all applications.
The Implementation Test
Principles-based regulation has a well-documented failure mode: it sounds reasonable and then collapses into inconsistency because nobody agrees what the principles mean in a specific case. The EU AI Act, for all its bulk, at least provides companies with a concrete compliance checklist. Taiwan's approach demands that sector regulators are genuinely expert, genuinely independent, and genuinely funded. If any of those conditions fail, the elegant framework becomes a rubber stamp.
That caveat applies with equal force to the UK's sector-led model. The FCA's AI-related guidance has been thoughtful but slow; the MHRA's work on software as a medical device is substantive but under-resourced. Europe's implementation challenge and Taiwan's are therefore more similar than the headline contrast suggests. Both are asking regulators to move faster than regulators are traditionally designed to move.
What Taiwan demonstrates, nonetheless, is that the binary choice between the EU's encyclopaedic prescriptivism and a purely voluntary market-led approach is a false one. A third path exists: statutory principles, domain expertise in implementation, and a hard line at the point of real-world deployment. Brussels and Westminster would do well to study it before the EU AI Act's next review cycle opens.