The Brussels Effect on Frontier AI: 18 Months of Evidence, Tested
Deep Dive · 9 min read

Anu Bradford's Brussels Effect thesis holds that EU regulation forces global product convergence because multinationals cannot afford parallel versions. Eighteen months into frontier-model deployment under the AI Act's shadow, the evidence is messier, more instructive, and ultimately more consequential than the clean thesis suggests.

The Brussels Effect is not a prediction about intentions; it is a prediction about incentive structures, and eighteen months of frontier-model deployment data is finally enough to test it properly.

Columbia Law professor Anu Bradford's thesis, refined in her 2023 follow-up work on digital regulation, argues that the EU's large, wealthy consumer base makes it economically irrational for multinationals to maintain genuinely divergent product versions. The path of least resistance is to engineer to the highest standard and ship one product globally. Applied to AI, that logic should mean every capability gate imposed on European users of OpenAI, Anthropic, or Google DeepMind's frontier models eventually becomes a global gate, because duplicating inference pipelines, safety classifiers, and content filters by jurisdiction is expensive and operationally fragile.


The reality, as of mid-2025, is that the thesis is partially right, selectively wrong, and pointing to a genuinely important fork in the road for the next decade.

"Maintaining two watermarking regimes requires separate engineering pipelines, separate audit trails, and separate vendor contracts. That overhead grows non-linearly as model updates accelerate."
AI in Europe analysis

What Has Actually Been Gated

Start with the concrete product differences that exist today. OpenAI's operator documentation, published through its EU transparency obligations, distinguishes between capabilities that are restricted by default in the European Economic Area and those that are simply absent. Memory-persistent personalisation features in ChatGPT were rolled out in the United States in early 2024 and only reached European users in a modified, opt-in form months later, following negotiations with the Irish Data Protection Commission, which serves as OpenAI's lead supervisory authority under GDPR. That delay was not a Brussels Effect dynamic; it was a bilateral negotiation. The feature eventually arrived, shaped by European rules, but it arrived.

More structurally significant is the handling of biometric inference. Several capability demonstrations that OpenAI showed in GPT-4o's launch materials, including real-time emotion inference from voice tone, were not made available in the EU. The company has not publicly confirmed this is a permanent gate rather than a phased rollout, but as of May 2025 the feature remains absent from European product tiers. Anthropic's Claude models, deployed in Europe through Amazon Web Services' EU data regions, similarly omit certain agentic browsing behaviours that are available in US deployments, though Anthropic's public policy notes attribute this to its own internal safety decisions rather than regulatory compulsion.
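The gating pattern described above is, in engineering terms, a jurisdiction-aware feature flag rather than a separate product. A minimal sketch of how such a gate might look, assuming a simple region-to-policy lookup; the feature keys and the abridged region list are invented for illustration, and no provider has published its actual gating code:

```python
# Hypothetical jurisdiction-based capability gating. Feature keys and the
# (abridged) EEA region list are illustrative, not any provider's real config.

EEA_SAMPLE = {"AT", "DE", "ES", "FR", "IE", "IT", "NL"}  # abridged; the EEA has 30 members

# Capabilities absent or opt-in in EEA deployments, mirroring the product
# differences described above: emotion inference is absent, memory
# personalisation shipped later in an opt-in form.
GATED_IN_EEA = {
    "voice_emotion_inference": "absent",
    "memory_personalisation": "opt_in",
}

def capability_status(feature: str, region: str) -> str:
    """Return 'enabled', 'opt_in', or 'absent' for a feature in a region."""
    if region in EEA_SAMPLE:
        return GATED_IN_EEA.get(feature, "enabled")
    return "enabled"

print(capability_status("voice_emotion_inference", "FR"))  # absent
print(capability_status("voice_emotion_inference", "US"))  # enabled
print(capability_status("memory_personalisation", "DE"))   # opt_in
```

The point of the sketch is that the gate itself is cheap; what is expensive, as the next sections argue, is everything downstream of the fork: separate safety classifiers, audit trails, and rollout schedules per jurisdiction.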

Google DeepMind occupies a different position. As the developer of Gemini models used across Workspace and through the Gemini API, DeepMind's product decisions are partly made in London and partly in Mountain View, and the regulatory surface is correspondingly complex. Gemini's image generation features were suspended globally in February 2024 after output controversies, but the subsequent relaunch applied differential content policies in European deployments, with stricter default filters on political content. That is a Brussels-adjacent outcome, even if the proximate cause was reputational rather than legal.

[Image: a software engineer's dual-monitor workstation showing two side-by-side API documentation pages with different feature flags highlighted in different colours.]

What Has Not Converged

The Brussels Effect thesis is most strained where the economic cost of divergence is low and the capability in question is strategically sensitive for the provider. Take system prompt confidentiality. OpenAI's EU transparency documentation requires it to disclose general categories of system-level instructions to enterprise customers, but does not require prompt disclosure to end users, and the functional product is identical on both sides of the Atlantic. No global convergence has occurred here because the EU's requirement is disclosure-oriented rather than capability-restricting.

Watermarking is a clearer case. The AI Act's Article 50 transparency obligations on synthetic content disclosure take effect for EU-deployed systems from August 2026; note that they attach to systems generating synthetic audio, image, video, or text content generally, not only to general-purpose models above the systemic-risk compute threshold, which is a separate classification. OpenAI and Google DeepMind are both building watermarking infrastructure, but neither has committed to deploying that infrastructure globally. Google DeepMind's SynthID watermarking technology, developed at its London facility and announced in 2023, is technically capable of global deployment; Google has chosen to position it as an optional tool for partners rather than a universal standard, which is precisely the kind of selective non-convergence Bradford's thesis predicts should be unstable over time.

The instability argument is worth taking seriously. Maintaining two watermarking regimes, one for EU-regulated deployments and one for everywhere else, requires separate engineering pipelines, separate audit trails, and separate vendor contracts. That overhead grows non-linearly as model updates accelerate. At current development velocity, the cost of divergence will likely force convergence, but on a five-year rather than an eighteen-month timescale.

[Image: researchers in a modern Paris or London AI lab, gathered around a whiteboard covered in model-architecture diagrams and regulatory compliance timelines.]

The Regulatory Counterweight: France, Germany, and the GPAI Code

The Brussels Effect assumes a unified EU regulatory signal, and in AI that assumption is under genuine stress. The General-Purpose AI Code of Practice, developed under the oversight of the European AI Office and published in its first draft form in late 2024, has drawn substantive pushback from French and German industry groups who argue its transparency obligations would disadvantage European frontier-model developers relative to US incumbents.

France's position is particularly telling. The government of Gabriel Attal, and subsequently that of Michel Barnier, has consistently argued through the French delegation to the AI Office that obligations calibrated for GPT-4-scale deployments create disproportionate compliance burdens for Mistral AI, the Paris-based lab that is Europe's most credible frontier-model developer. Mistral's chief executive Arthur Mensch has been publicly critical of what he characterises as asymmetric regulation that locks in the advantage of incumbents who can absorb compliance costs. That is not the Brussels Effect in action; it is an attempt to modify the signal before it propagates.

Germany's Bundesnetzagentur, which has been designated as a competent authority under the AI Act for certain market surveillance functions, has taken a more procedural stance, focused on interoperability requirements and documentation standards rather than capability gates. The practical effect is that the EU regulatory signal on frontier models is not a single, clean transmission; it is a negotiated output from at least three distinct institutional voices (the European AI Office, national data protection authorities, and sector-specific regulators), each with partially conflicting priorities.


The Next Decade: Convergence, Fragmentation, or Something Else

Bradford's thesis is ultimately a thesis about power asymmetries and their resolution over time. The EU has market power; US firms want access to that market; therefore US firms will engineer to EU standards globally. The mechanism is clean, but it assumes that EU standards are stable, enforceable, and technically precise enough to serve as engineering targets.

On frontier AI, none of those conditions fully holds yet. The AI Act's provisions on general-purpose AI systems were finalised with significant ambiguity about what counts as a systemic risk threshold, what transparency obligations practically require, and how conformity assessments will be conducted for models that update continuously. The European AI Office has twelve months from the Act's full entry into force to produce implementing acts that clarify these questions. Until those implementing acts exist, the engineering target is blurry, and multinationals have a rational incentive to wait before committing to global product architectures.

What is more likely over the next decade is a tiered convergence. Some categories (data retention limits, output watermarking, disclosure of AI-generated content) will converge globally because the cost of divergence outweighs the benefit of maintaining separate systems. Others, particularly those involving agentic autonomy, real-time biometric inference, and politically sensitive content moderation, will remain regionally differentiated for longer, because the political and legal risk of applying European standards in US or other markets is genuinely asymmetric in the other direction.

The case of Mistral AI is the variable that Bradford's original thesis, developed primarily from analysis of consumer goods and data privacy, could not anticipate. If the EU produces competitive frontier-model developers, the Brussels Effect dynamic changes fundamentally. A globally competitive Mistral is not subject to the same logic as a foreign multinational seeking market access; it is a domestic actor with a direct interest in shaping the regulatory signal rather than merely adapting to it. That shift from rule-taker to rule-maker is the real test of European AI sovereignty, and eighteen months in, it remains genuinely unresolved.

THE AI IN EUROPE VIEW

The Brussels Effect thesis remains the most intellectually honest framework for analysing how EU AI regulation propagates globally, but it needs significant updating to fit the frontier-model era. Bradford's mechanism assumes passive foreign firms adapting to an external signal. What we actually see is active lobbying from US incumbents to soften the signal, active lobbying from European developers to avoid being disadvantaged by it, and a regulatory apparatus (the European AI Office, the Irish DPC, the Bundesnetzagentur) that is still building the institutional capacity to enforce it coherently.

The partial convergence we have documented over eighteen months is real but fragile. Watermarking will probably go global; emotion inference from voice probably will not, at least not on EU terms. The decisive variable is not the AI Act's text but its implementation: if the European AI Office produces technically precise, enforceable obligations by 2026, the convergence pressure increases substantially. If it produces aspirational guidance that firms can satisfy with documentation rather than product changes, the Brussels Effect on frontier AI will be remembered as a missed opportunity. The EU has the market power to set the standard. Whether it has the institutional capacity to do so is the question that should be keeping policymakers up at night.

