EU AI Act: Brussels Blinks, but the Framework Is Not Dead Yet

The EU's landmark AI Act is showing early signs of strain, with Brussels reportedly considering a one-year grace period for high-risk systems and delaying fines until August 2027. For European financial services firms and AI developers, the mid-course adjustments signal both relief and a warning that regulatory uncertainty is here to stay.

The EU AI Act, once held up as the gold standard for comprehensive AI governance, is already wobbling. Just months after the Act entered into force in August 2024, the European Commission is reportedly floating a 12-month grace period for high-risk AI systems and considering postponing financial penalties until August 2027. That represents a dramatic climb-down from the original enforcement timeline, which had most obligations kicking in during 2026. For financial services firms in Brussels, Frankfurt, Amsterdam, and London, this is consequential news, and not entirely unwelcome.

The pressure driving the reconsideration is coming from multiple directions at once. Meta Platforms and Alphabet have publicly signalled that overly rigid compliance deadlines risk limiting European users' access to frontier AI capabilities. US government officials have added geopolitical weight by raising concerns about trade friction. The combination of commercial lobbying, competitive anxiety, and diplomatic pressure has forced Brussels into a posture it clearly did not anticipate needing quite this soon.


What Is Actually on the Table

The proposed changes are not cosmetic. Sources close to the Commission suggest several substantive modifications under active consideration. A 12-month compliance grace period for operators of high-risk AI systems would push requirements that were expected in 2026 well into 2027. Postponing financial penalties to August 2027 gives enterprises building compliance infrastructure considerably more runway. Specific areas under review include conformity assessment requirements for high-risk systems, data governance and training-data documentation, transparency obligations for generative AI, and specific duties on general-purpose AI models, including foundation models.

Drazen Luksic, a Brussels-based regulatory affairs partner who has advised multiple EU financial institutions on AI Act readiness, has noted publicly that the conformity assessment infrastructure was never going to be ready on the original schedule. Third-party assessment bodies, technical standards, and certification processes have all developed more slowly than the Commission's original timeline assumed. That is not a failure of ambition; it is a failure of sequencing.

The reconsideration does not signal abandonment of the Act's core principles. Risk-based regulation, specific obligations for high-risk applications, and protections for fundamental rights remain the foundation. What is shifting is implementation pace, technical specifications, and specific compliance mechanisms. The Commission's AI Act portal continues to publish updates on timeline adjustments as they are confirmed.

Infrastructure That Was Not Ready

The implementation challenges that forced this reconsideration were, in hindsight, predictable. CEN-CENELEC, the European standards body responsible for AI Act technical standards, has been working on specific requirements covering risk management, data governance, transparency, human oversight, and accuracy. Standards development is inherently slow, and the Act's requirements ran ahead of what was actually available in several critical areas. Without harmonised standards, even well-resourced organisations were left guessing at what compliant behaviour actually looked like in practice.

Professor Virginia Dignum of Umeå University, one of Europe's most cited AI ethics researchers and a former member of the EU's High-Level Expert Group on AI, has consistently argued that regulatory ambition must be matched by implementation infrastructure. Her position, expressed in multiple public forums, is that rushing conformity assessment before the underlying standards exist creates legal uncertainty that actually harms the people the Act is meant to protect.

Small and medium enterprises have borne a disproportionate share of that uncertainty. Compliance costs that large financial institutions can absorb become existential burdens for AI startups. The result has been a competitive drag on European AI development at precisely the moment when Mistral AI in Paris and Aleph Alpha in Heidelberg are trying to establish themselves as credible alternatives to US hyperscalers.


Financial Services: Better Placed, Still Under Pressure

Of all the sectors grappling with AI Act obligations, financial services is arguably the best positioned. Banks, insurers, and asset managers already operate under layers of EU regulation, from MiFID II to the Digital Operational Resilience Act. Internal compliance functions are staffed, documented processes exist, and regulators are familiar faces. AI Act requirements, while additive, slot more naturally into existing governance frameworks than they do in, say, healthcare or manufacturing.

That does not mean the burden is trivial. Credit-scoring models, anti-money-laundering systems, and algorithmic trading tools that fall into the high-risk category face conformity assessment requirements that are still not fully specified. Financial institutions that had begun building compliance infrastructure on the original timeline now face the uncomfortable question of whether to pause, continue, or accelerate, not knowing which version of the rules will ultimately apply.

The European Banking Authority has been active in clarifying how AI Act obligations interact with existing prudential requirements, but gaps remain. Until the Commission finalises the grace-period arrangements and publishes clear guidance on the postponement of penalties, legal teams are effectively working with a moving target.

Competitive Dynamics and European Sovereignty

The AI Act's turbulence has reopened the debate about European AI sovereignty. US providers including OpenAI, Anthropic, Google, and Microsoft have engaged constructively with EU regulators while also making clear that specific requirements could affect which product versions European customers receive. The prospect of a two-tier AI market, where European users access less capable tools than their counterparts elsewhere, is a genuinely uncomfortable political reality for the Commission.

European firms are caught in their own bind. Mistral AI and Aleph Alpha face the same compliance costs as their US competitors but with smaller revenue bases to absorb them. The foundation-model obligations under the Act, which apply regardless of downstream application, have been particularly controversial precisely because they target the infrastructure layer where European firms are trying hardest to compete.

The UK's position adds a further dimension. Post-Brexit, the UK's AI regulatory approach under the current Labour government has leaned towards sector-specific, principles-based guidance rather than a horizontal legislative framework. The Department for Science, Innovation and Technology has explicitly positioned this as a competitive advantage, arguing that the UK can move faster and attract AI investment that finds the EU environment too prescriptive. Whether that argument holds as the AI Act softens its edges remains to be tested.

What Regulators and Industry Should Take From This

Several durable lessons are emerging from the AI Act's implementation difficulties. First, comprehensive horizontal regulation requires supporting infrastructure to be built in parallel, not in sequence. Launching obligations before standards bodies, assessment providers, and certification processes are operational creates bottlenecks that harm everyone, particularly smaller firms with fewer resources to manage ambiguity.

Second, iterative regulation is not a sign of weakness. The Commission's willingness to reconsider specific implementation arrangements reflects legitimate learning from early compliance experience. Regulators that treat initial rules as immutable tend to produce worse outcomes than those prepared to adjust as evidence accumulates. The question is whether adjustments are made transparently and predictably, or in ways that generate their own uncertainty.

Third, timelines must be grounded in realistic assessments of what the market can actually do. AI technology and deployment are evolving rapidly, and regulatory calendars drafted in 2021 may simply not map onto the technical realities of 2025. Building explicit flexibility into frameworks is preferable to the current situation, where informal grace periods are floated through press briefings.

For financial services firms specifically, the practical implication is that regulatory uncertainty over EU AI governance is a durable feature of the next two to three years, not a temporary disruption. Strategic planning should assume continued evolution of specific requirements, with conformity assessment processes, foundation-model obligations, and data-governance standards all likely to shift before they stabilise. Maintaining adaptable compliance architectures, rather than point-in-time solutions, is the only approach that makes sense in this environment.

The AI Act is not dead. The risk-based framework, the high-risk classifications, the fundamental-rights protections: all of that remains intact. What Brussels is doing is buying time to build the infrastructure that should have been built first. Whether that time is used well will determine whether the EU ends up with world-leading AI governance or a cautionary tale about regulatory overreach, and that question will not be answered in the next few months.


