Two Asian AI Blueprints, One European Compliance Question: Which Template Will Your Firm Follow?

Japan's soft-law AI Promotion Act and South Korea's binding AI Basic Act have now fully bedded down into operating rules, creating two sharply divergent regulatory templates. European multinationals with Asian market exposure can no longer run a single APAC compliance playbook and must decide which standard to build to by default.

Asian AI regulation has just split into two irreconcilable philosophies, and European firms with exposure to Japanese and Korean markets need to pick a posture now rather than wait for convergence that will not come.

Eleven months after enactment, Japan's AI Promotion Act and South Korea's AI Basic Act have finished bedding down into concrete operating rules. What looked in 2024 like two countries moving in parallel now looks starkly different. Tokyo has chosen the softest AI law of any major economy. Seoul has gone the other way and built the strictest comprehensive framework outside the EU. Every other Asian regulator is quietly choosing a side, and so must every European board with an APAC strategy.

Japan: No Fines, No Bans, Just Guidelines

Japan's AI Promotion Act passed on 28 May 2025, with most provisions in force from 4 June 2025. The AI Strategy Headquarters, chaired by the Prime Minister, held its first meeting on 13 September 2025 and is now drafting the 2026 AI Basic Plan.

The Act does something that has not really been tried at scale: it governs AI without a single mandatory prohibition. There are no pre-launch checks, no prohibited use categories, and no fines. Non-compliance is handled through voluntary guidelines from METI and the Ministry of Internal Affairs and Communications, plus a naming mechanism that publishes operators who refuse to engage. Data use for training is highly permissive, with no opt-out right for rights-holders. That permissiveness is exactly why Japan is now one of the friendliest training jurisdictions in the world, and why several European AI labs are quietly factoring it into their data strategy decisions.

For European context, this stands in near-total contrast to the EU AI Act. Dragoș Tudorache, the European Parliament rapporteur who shepherded the EU AI Act through to adoption, has consistently argued that a purely soft-law approach leaves users without enforceable recourse. Japan's model tests that proposition at national scale.

Korea: Binding, Extraterritorial, and Focused on High-Impact AI

Korea's AI Basic Act entered into force on 22 January 2026 and is Asia's first comprehensive AI statute. The Enforcement Decree published on 21 January 2026 sets the operational rules and draws the high-impact AI line at 10²⁶ floating-point operations (FLOPs) of training compute. Above that line, operators must carry out risk assessments, document user protection measures, and submit to enhanced supervisory attention from MSIT.
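For readers who want to see how a compute-based line works in practice, here is a minimal, purely illustrative sketch that checks a training run against Korea's 10²⁶ FLOP high-impact threshold and, for comparison, the EU AI Act's 10²⁵ FLOP systemic-risk presumption for general-purpose models. The function name and dictionary keys are our own shorthand, not statutory terms.

```python
# Illustrative only: threshold constants reflect Korea's Enforcement Decree
# (high-impact AI) and the EU AI Act's systemic-risk presumption for GPAI.
KOREA_HIGH_IMPACT_FLOPS = 1e26
EU_SYSTEMIC_RISK_FLOPS = 1e25


def compliance_flags(training_flops: float) -> dict:
    """Return which compute-based regimes a training run crosses."""
    return {
        "korea_high_impact": training_flops >= KOREA_HIGH_IMPACT_FLOPS,
        "eu_systemic_risk": training_flops >= EU_SYSTEMIC_RISK_FLOPS,
    }


# A model trained with roughly 5 x 10^25 FLOPs crosses the EU line
# but stays below Korea's high-impact threshold.
print(compliance_flags(5e25))
```

The order-of-magnitude gap between the two lines is itself a policy signal: Seoul's threshold catches only the largest frontier runs, while Brussels reaches one tier further down.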

The Act also reaches beyond Korean soil. Any AI system whose outputs affect Korean users triggers the regime, including a requirement to appoint a domestic representative if the provider has no local presence. That is exactly the extraterritorial logic the EU embedded in the GDPR and later in the AI Act, and it is already reshaping how American and Chinese platforms plan Korean launches. European providers are not exempt: a Paris-based generative AI firm serving Korean enterprise clients must appoint a Korean representative or face administrative fines of up to KRW 30 million per violation, with compounding corrective orders for continued non-compliance.

Andrea Renda, senior research fellow at CEPS in Brussels and one of Europe's most-cited AI policy analysts, has noted that extraterritorial scope is the single most consequential design choice a regulator can make, because it forces global operators to raise their compliance floor regardless of their home jurisdiction. Korea has made exactly that choice.

Where the Two Models Converge and Where They Diverge

Both frameworks are promotion-first. Both avoid the EU's prohibited-use list. Both lean on sector guidance for specifics rather than prescribing everything in primary legislation.

But the core philosophical divide is sharp. Japan treats AI law as a developmental scaffold and trusts firms to self-regulate under published guidelines. Korea treats AI law as a user-protection backstop with genuine teeth, even if those teeth are relatively modest by EU standards.

The practical effect on multinational legal counsel is that a single Asia-Pacific AI compliance playbook is no longer sufficient. Korea requires labelling of generative AI outputs under Article 31 of the Act, including deepfake labelling. Japan encourages it. Korea imposes operator notification duties. Japan nudges them. For any global model provider serving both markets, the minimum sensible posture is Korea-grade compliance, because a Japan-only build is then trivially portable upward at marginal additional cost.

Which Model Are Other Regulators Copying?

The early signals are revealing. Thailand's AI Act, now in force, looks more Japanese than Korean, with soft-law emphasis. Singapore's AI Verify framework stays firmly voluntary and export-focused. India's April 2026 labelling rules sit somewhere in between: mandatory for a narrow set of generative AI categories but without Korea's extraterritorial reach. Indonesia's draft regulation leans Japanese.

That leaves Korea largely alone in the region as a fully binding, extraterritorial regime, a position that will shape where global AI launches happen first and where they quietly delay. For European firms, the parallel to draw is with the EU AI Act itself: binding extraterritorial regimes do not attract less business; they attract better-prepared business.

Hot-Button Contrasts at a Glance

  • Fines: Japan none; Korea up to KRW 30 million per violation.
  • Prohibited use categories: Japan none; Korea none. Both differ from the EU AI Act.
  • Training data opt-out: Japan no; Korea not explicit; EU yes.
  • Extraterritorial reach: Japan no; Korea yes.
  • Mandatory labelling of generative AI output: Japan no; Korea yes.
  • Designated domestic representative required for foreign providers: Japan no; Korea yes.

Updates

  • published_at reshuffled 2026-04-29 to spread distribution per editorial directive
  • Byline migrated from "Sofia Romano" (sofia-romano) to Intelligence Desk per editorial integrity policy.
AI Terms in This Article (3 terms)
generative AI

AI that creates new content (text, images, music, code) rather than just analyzing existing data.

at scale

Applied broadly, to a large number of users or use cases.

compute

The processing power needed to train and run AI models.
