This comparison breaks down what matters most for EU and UK readers: scope, risk classification, enforcement teeth, innovation incentives, and what each approach means for businesses navigating a fragmenting global regulatory landscape.
The Philosophical Divide
The EU AI Act starts from precaution. Its architects designed a framework where AI systems carry ascending obligations based on risk tiers, with the European Commission treating AI governance as consumer protection first and innovation second. The Act's chief proponent, EU Internal Market Commissioner Thierry Breton, consistently framed it as a necessary guardrail before leaving office in late 2024; his successor continues that line.
South Korea's approach inverts this logic. The AI Basic Act is explicitly pro-innovation, with the Korean government framing regulation as an enabler of its 2.2 trillion Korean Won AI investment strategy. Rather than blanket obligations, it targets what it calls "high-impact AI" with focused requirements while leaving lower-risk applications largely to self-regulation.
For Kilian Gross, Head of Unit for AI Policy at the European Commission's DG CONNECT, the EU's approach reflects a deliberate choice: "Trustworthy AI is not a constraint on competitiveness; it is a precondition for it." South Korean officials argue the opposite, insisting that lighter-touch governance allows domestic developers to move faster and compete globally.
This distinction matters beyond philosophy. For multinationals headquartered in London, Paris, Berlin, or Amsterdam, two major trading partners now operate fundamentally different AI governance systems. Aligning internal compliance programmes to both simultaneously is not straightforward.
Risk Classification: Tiers vs. Impact
The EU AI Act uses a four-tier pyramid:
- Unacceptable risk: banned outright (for example, real-time remote biometric identification in publicly accessible spaces, subject to narrow law-enforcement exceptions)
- High risk: heavy conformity assessment and documentation obligations
- Limited risk: transparency duties only (for example, chatbot disclosure)
- Minimal risk: no obligations beyond existing law
The classification is technology-centric, focusing on what the AI system does and in what context it operates.
South Korea's model is narrower. The AI Basic Act identifies "high-impact AI" based on societal consequence rather than technical function. An AI system sorting job applications qualifies in both jurisdictions (high-risk under the EU Act, high-impact under Korea's), but Korea's definition centres on outcomes affecting fundamental rights, physical safety, and access to essential services. Everything else sits in a largely unregulated general category.
The practical upshot for European businesses: a system classified as limited-risk under the EU Act may still qualify as high-impact under Korea's binary framework, or vice versa. Companies cannot assume that EU compliance maps cleanly onto Korean obligations.
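To make the mismatch concrete, here is a deliberately simplified sketch of the two classification logics. The field names, the tiering rules, and the chatbot example are all illustrative assumptions for this article, not the statutes' actual legal tests; real classification turns on detailed use-case annexes and guidance.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    # Hypothetical system profile; fields are illustrative, not statutory.
    name: str
    affects_fundamental_rights: bool   # hiring, credit, essential services
    eu_listed_high_risk_use_case: bool # falls under an EU high-risk use case
    interacts_with_humans: bool        # triggers EU transparency duties

def eu_tier(s: AISystem) -> str:
    """Simplified EU AI Act tiering (prohibited practices omitted)."""
    if s.eu_listed_high_risk_use_case:
        return "high"
    if s.interacts_with_humans:
        return "limited"
    return "minimal"

def korea_category(s: AISystem) -> str:
    """Simplified Korean binary test keyed to societal impact."""
    return "high-impact" if s.affects_fundamental_rights else "general"

# A customer-facing chatbot whose answers shape access to a service:
# limited-risk in the EU sketch, high-impact in the Korean one.
chatbot = AISystem("eligibility chatbot",
                   affects_fundamental_rights=True,
                   eu_listed_high_risk_use_case=False,
                   interacts_with_humans=True)
print(eu_tier(chatbot), korea_category(chatbot))  # limited high-impact
```

The point of the sketch is structural: the EU test keys on listed use cases and context, the Korean test on a single impact criterion, so the two outputs need not agree for any given system.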
Enforcement: The Penalty Gap
The most striking divergence is in enforcement firepower. The EU can levy fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher. For a large European enterprise, that could mean nine-figure exposure.
South Korea's maximum penalty of 300 million Korean Won (roughly USD 220,000) is, by comparison, a rounding error on most tech balance sheets. Critics argue this undermines deterrence. Supporters counter that Korea's approach relies on industry self-regulation and reputational incentives rather than punitive fines.
Anu Bradford, Professor of Law at Columbia and author of The Brussels Effect, has argued that the EU's willingness to impose large fines is precisely what gives its regulatory model global reach: companies build to EU standards rather than face costly divergence, and that choice is the mechanism by which Brussels exports its rules. Korea's lighter penalties mean no equivalent gravitational pull on global compliance norms, at least for now.
It is worth noting that South Korea's ruling People Power Party has already signalled that penalties could increase as the regulatory infrastructure matures. The current ceiling may not reflect the long-term picture.
Innovation Provisions: Where Korea Leads
Korea's AI Basic Act includes provisions that the EU lacks at equivalent scale. A dedicated AI Committee, chaired by the Prime Minister, coordinates policy across ministries. The government has committed 2.2 trillion Korean Won (approximately USD 1.6 billion) to AI research and development, with regulatory sandboxes allowing companies to test high-impact AI systems under supervised conditions before full compliance obligations kick in.
The EU offers regulatory sandboxes too, but they are narrower in scope and slower to operationalise. Brussels has faced sustained criticism from industry bodies including DigitalEurope and the European AI Alliance for creating a framework that large corporations can navigate but that crushes smaller players under compliance costs. The European Commission's own impact assessment estimated conformity costs of EUR 300,000 to EUR 400,000 per high-risk system, a figure that makes many SME founders wince.
The contrast is sharpest for early-stage companies. A European startup developing a novel medical AI diagnostic tool faces immediate high-risk classification, extensive documentation requirements, and no straightforward route to test in a live environment before those obligations bite. A Korean counterpart can enter a supervised sandbox, gather evidence, and approach compliance incrementally.
What This Means for European Businesses
For companies based in the EU, the UK, or Switzerland that sell into the Korean market, or for Korean firms expanding into Europe, the divergence creates concrete strategic decisions. Three practical considerations stand out:
- Dual compliance is expensive but unavoidable for companies active in both markets. Building to EU standards by default is the safest strategy, as it typically exceeds Korean requirements. Over-engineering for Korea is unnecessary; under-engineering for Europe is not an option.
- Korea's sandbox provisions offer genuine advantages for companies developing novel AI applications. European firms with Korean subsidiaries or partnerships can access these provisions through local entities, giving multinationals a useful testing ground that has no real EU equivalent at scale.
- The penalty asymmetry may not last. Seoul has signalled that fines will increase as enforcement data accumulates. Companies that treat the current gap as a structural feature rather than a temporary condition are taking a planning risk.
The Bigger Picture for Global AI Governance
The Korea-EU divergence is one data point in a broader fragmentation of global AI regulation. The United Kingdom, post-Brexit, has chosen a sector-led, principles-based approach, delegating oversight to existing regulators and supporting them with the evaluation work of its AI Safety Institute, rather than enacting a standalone AI Act. Switzerland, outside the EU but closely aligned with its regulatory orbit, is watching Brussels carefully before deciding whether to mirror the Act. The result is that even within Europe's immediate neighbourhood, no single template dominates.
For European policymakers, the Korean experiment is worth watching carefully. A government that treats regulation as industrial strategy, backs it with substantial public R&D investment, and uses sandboxes to reduce compliance friction for innovators is running a genuine alternative hypothesis. Whether Korea's lighter-touch model produces safer, more trustworthy AI outcomes than Europe's prescriptive approach will not be clear for several years. But the enforcement data that accumulates between now and 2028 will be among the most important evidence the global AI governance debate has yet produced.