Korea's AI Basic Act Is Already Being Rewritten: What European Firms Must Know Now

South Korea's AI Basic Act took effect in January 2026 and entered a calibration phase within three months. With extraterritorial reach, a 10^26 FLOPS threshold, and mandatory transparency obligations, it is already shaping compliance planning for European multinationals serving Korean users.

South Korea's AI Basic Act is a live, evolving document, not a finished rulebook, and European multinationals with any Korean user footprint need to treat it that way. The Act took effect on 22 January 2026, the day after publication of its Enforcement Decree. By April, less than three months into enforcement, South Korea's Ministry of Science and ICT (MSIT) was already in active calibration mode, responding to feedback from Korean firms and foreign counsel. For EU and UK technology businesses, that signal matters: the first comprehensive AI law in a major Asian economy is being refined in real time, and the obligations it imposes reach well beyond Korea's borders.

The 10^26 FLOPS Line Is the Most Consequential Technical Detail

Under Article 31 of the Enforcement Decree, AI systems trained above the 10^26 FLOPS compute threshold trigger enhanced obligations: risk assessment, mandatory user protection measures, and heightened supervisory attention from MSIT. Below that threshold, operators face only advance notice duties. That single number has become the dividing line between systems that require a Korea-specific compliance plan and those that do not. For European AI developers, particularly those building frontier models at scale, the threshold sits uncomfortably close to the compute scale of today's largest training runs. Any European lab or cloud provider whose models are deployed to Korean users, directly or through resellers, should classify those systems against this benchmark immediately.
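As a first-pass triage against that benchmark, the widely used back-of-the-envelope estimate is training FLOPs ≈ 6 × parameters × training tokens. The sketch below applies that heuristic to a hypothetical model inventory; the 10^26 figure is the Decree's threshold, but the model names and sizes are illustrative assumptions, not anyone's real portfolio.

```python
# Rough triage of a model inventory against Korea's 10^26 FLOPS threshold.
# Uses the common estimate: training FLOPs ~= 6 * parameters * training tokens.
# Model names and sizes below are illustrative assumptions, not real data.

THRESHOLD_FLOPS = 1e26  # compute threshold under the Enforcement Decree


def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6ND heuristic."""
    return 6.0 * params * tokens


def classify(params: float, tokens: float) -> str:
    """Map a model to the two regimes the article describes."""
    if training_flops(params, tokens) >= THRESHOLD_FLOPS:
        return "enhanced obligations"
    return "advance notice only"


# Hypothetical inventory: name -> (parameter count, training tokens)
inventory = {
    "support-chatbot": (7e9, 2e12),      # ~8.4e22 FLOPs, well below the line
    "frontier-candidate": (1e12, 2e13),  # ~1.2e26 FLOPs, above the line
}

for name, (n, d) in inventory.items():
    print(f"{name}: {training_flops(n, d):.1e} FLOPs -> {classify(n, d)}")
```

The heuristic is deliberately coarse; a defensible compliance classification would rest on actual training logs, but a sweep like this identifies which systems need the closer look.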

Extraterritorial Reach Is Where the Real Teeth Are

The fines themselves are modest by European standards. A KRW 30 million cap (approximately EUR 20,000) is not going to reshape a large technology firm's budget planning. What matters far more is the Act's extraterritorial application: it applies to AI systems whose outputs affect Korean users, regardless of where the provider is located. That means European model providers and platform operators that target Korean users, whether directly or through Korean distribution partners, must appoint a domestic representative or face administrative penalties.

The representative requirement is functionally a gatekeeper. Korean regulators now have a named individual to serve notices on, pursue corrective orders against, and compel to produce compliance documentation. European firms that have already navigated the EU AI Act's authorised representative provisions will recognise this structure immediately; the compliance logic is almost identical.

Transparency Obligations: Where Most Compliance Work Is Now Concentrated

Article 31 transparency obligations are where Korean and foreign businesses are currently concentrating operational effort. The requirements include advance user notice when interacting with AI, clear labelling of generative AI output, and deepfake labelling. This is the piece of the law that most closely mirrors what the EU AI Act and the UK's emerging AI governance framework are implementing. It is also the easiest piece to fail quietly, with nobody noticing until a complaint lands.

Dragoș Tudorache, the Romanian MEP who served as co-rapporteur of the EU AI Act in the European Parliament, has consistently argued that transparency obligations are the enforcement mechanism most likely to generate early case law, precisely because consumer-facing failures are visible. Korea's Article 31 labelling regime operates on the same logic. The European Commission's AI Office, established in 2024 to coordinate AI Act enforcement across member states, has itself flagged deepfake and synthetic content labelling as a priority area for the first wave of enforcement guidance. European firms that have already built labelling pipelines for EU compliance should audit whether those pipelines extend to Korean-language interfaces.
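Extending a labelling pipeline to Korean interfaces can be as simple as a locale-aware wrapper around generated content. This is a minimal sketch; the function name, CSS class, and the Korean label wording are all assumptions for illustration, not the statutory text required by the Act.

```python
# Illustrative only: the label wording and placement are assumptions,
# not the statutory language required by Korea's AI Basic Act.

# "This content was generated by artificial intelligence (AI)."
AI_NOTICE_KO = "이 콘텐츠는 인공지능(AI)으로 생성되었습니다."


def label_generated_html(html_body: str, locale: str) -> str:
    """Prepend a visible AI-generation notice for Korean-locale users.

    Non-Korean locales pass through unchanged here; a real pipeline
    would apply each jurisdiction's own disclosure rules instead.
    """
    if locale.startswith("ko"):
        banner = f'<div class="ai-disclosure">{AI_NOTICE_KO}</div>'
        return banner + html_body
    return html_body
```

The point of the audit the article recommends is exactly this branch: a pipeline that labels English output but returns Korean-locale output unlabelled fails Article 31 silently.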

How Korea Compares to the EU AI Act

Korea's regulatory position sits between Japan and the EU. Japan's AI Promotion Act carries no fines, no bans, and no mandatory labelling, relying instead on voluntary guidelines. Korea's AI Basic Act carries modest fines, no bans, and mandatory labelling for generative AI outputs. The EU AI Act carries large fines, prohibited use categories, and a full risk-tier framework with penalties reaching EUR 35 million or seven per cent of global annual turnover.

For European companies running multi-jurisdiction AI deployments, the practical implication is that Korea-grade compliance is now the minimum viable baseline for any product touching Korean users. It is less demanding than full EU AI Act compliance, but it is not optional, and it is actively being tightened.

The Sectors Most Exposed

  • Financial services: credit evaluation, fraud detection, and algorithmic trading support all appear on the high-impact list. European banks and fintech firms with Korean operations or Korean-facing digital products face the most immediate obligations.
  • Healthcare AI: medical devices, triage support, and diagnostic imaging systems.
  • Employment and HR technology: candidate screening and workforce analytics.
  • Education AI: student evaluation and adaptive learning platforms.
  • Public sector and transport automation.

For European financial services firms specifically, the overlap with existing EU obligations under the Digital Operational Resilience Act and the EBA's guidelines on machine learning in credit risk is significant. A Korea-compliance workstream does not need to be built from scratch; it can be grafted onto existing EU AI governance structures, provided someone actually does the grafting.

What European Multinationals Should Do Right Now

Four concrete steps are appropriate for any European firm with a Korean user footprint.

  1. Appoint a Korean representative. Failing to appoint one is itself a violation. This is the single lowest-effort compliance step and among the highest-risk omissions.
  2. Classify your models by FLOPS. If you train or significantly fine-tune above 10^26 FLOPS, expect to be treated as a high-impact operator regardless of your industry or corporate structure.
  3. Ship AI labelling in Korean interfaces. Advance user notice and generative AI output labels must be visible in Korean-language user interfaces, not buried in English-language documentation.
  4. Build a functioning complaints channel. Korean regulators expect operators to handle user complaints on AI-related matters and to produce evidence that the channel is operational.
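The four steps above can be encoded as an internal audit check. The sketch below is one possible shape for that check; every field name and gap message is an assumption made for this illustration, not statutory terminology.

```python
# Minimal sketch of an internal Korea-compliance audit covering the four
# steps in the article. All field names are illustrative assumptions.
from dataclasses import dataclass

THRESHOLD_FLOPS = 1e26  # compute threshold under the Enforcement Decree


@dataclass
class KoreaDeployment:
    """Compliance state of one Korea-facing product (hypothetical schema)."""
    domestic_representative: bool = False
    training_flops: float = 0.0
    korean_ui_ai_notice: bool = False
    korean_ui_output_labels: bool = False
    complaints_channel_live: bool = False


def audit(d: KoreaDeployment) -> list[str]:
    """Return the list of outstanding actions for this deployment."""
    gaps = []
    if not d.domestic_representative:
        gaps.append("appoint a Korean domestic representative")
    if d.training_flops >= THRESHOLD_FLOPS:
        gaps.append("prepare high-impact operator risk assessment")
    if not (d.korean_ui_ai_notice and d.korean_ui_output_labels):
        gaps.append("ship AI notice and output labels in Korean UI")
    if not d.complaints_channel_live:
        gaps.append("stand up an operational complaints channel")
    return gaps
```

Note that the FLOPS check adds work rather than removing it: crossing the threshold layers the risk-assessment obligation on top of the other three steps, which apply regardless of model scale.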

The Competitiveness Balance and What It Means for Foreign Providers

MSIT's public posture has been deliberate: minimum regulation to support domestic competitiveness, serious transparency and user protection requirements, and active calibration based on industry feedback. Korean firms such as Samsung, Naver, and Kakao have benefited from that calibration posture; the Act's modest fines have not slowed commercial deployment. For foreign providers, including European ones, the calibration posture is less reassuring. Ambiguous rules tend to tighten over time as enforcement case law develops, and European firms that wait for regulatory certainty before building compliance infrastructure will find themselves behind.

Margrethe Vestager, during her tenure as Executive Vice-President of the European Commission for a Europe Fit for the Digital Age, repeatedly noted that extraterritorial regulatory reach is only effective if the regulated entity has a known domestic point of contact. Korea has applied exactly that logic. European firms that have already absorbed this lesson from the EU AI Act's own representative requirements have a head start; those that have not should not assume Korea's enforcement will remain gentle indefinitely.

The Broader Regulatory Picture

Korea's approach is politically exportable in ways the EU AI Act is not, at least not at pace. The Article 31 labelling regime is technically legible, operationally straightforward, and politically palatable to governments that want to regulate AI without appearing to strangle domestic industry. European trade lawyers and compliance teams advising clients on multi-jurisdiction AI deployments should expect Singapore, Japan, and other major markets to borrow elements of this framework over the next 18 to 24 months. A Korea-grade compliance playbook is not just a Korea solution; it is increasingly the global minimum viable compliance architecture for any AI product with cross-border reach.
