What Prescriptive Rules Actually Cost
The EU AI Act creates direct liability for providers of high-risk AI systems. In financial services, that category is broad: credit scoring of natural persons and risk assessment and pricing in life and health insurance sit squarely on Annex III's high-risk list, and tools feeding those decisions are pulled into scope with them. The compliance burden is not theoretical. Early-stage AI companies in the EU are now reporting that compliance activities consume up to 40 per cent of their pre-revenue budgets. That capital does not go to compute, talent, or customer acquisition. It goes to lawyers and auditors.
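To see what that share means in runway terms, here is a minimal sketch with purely illustrative numbers; the budget and burn figures are assumptions for the example, not reported data from any company.

```python
# Illustrative runway arithmetic for a pre-revenue startup that must
# divert a fixed share of its budget to compliance work. All figures
# are assumptions for the example, not reported data.

def runway_months(budget_eur: float, monthly_burn_eur: float,
                  compliance_share: float) -> float:
    """Months of product runway left after the compliance diversion."""
    return budget_eur * (1 - compliance_share) / monthly_burn_eur

BUDGET = 2_000_000  # assumed pre-revenue budget, EUR
BURN = 100_000      # assumed monthly spend on product work, EUR

for share in (0.0, 0.2, 0.4):
    months = runway_months(BUDGET, BURN, share)
    print(f"compliance share {share:.0%}: {months:.0f} months of product runway")
# 0% -> 20 months, 20% -> 16, 40% -> 12: a 40 per cent compliance
# share shortens product runway by 40 per cent.
```

The arithmetic is trivial by design: a fixed compliance share maps one-to-one onto lost runway, which is the resource an early-stage company can least afford to lose.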
Andrea Renda, senior research fellow at the Centre for European Policy Studies (CEPS) in Brussels, has argued consistently that the Act's conformity assessment requirements for high-risk systems impose disproportionate costs on smaller providers, and that the technical documentation obligations alone will disadvantage European start-ups relative to large US and Chinese incumbents that can absorb those costs at scale. He is right. Regulatory moats built from compliance complexity do not protect consumers; they protect incumbents.
The European AI Office, established in February 2024 within the European Commission, is tasked with overseeing the general-purpose AI model rules and coordinating enforcement across member states. Its remit is significant; so is its challenge: 27 member states, divergent national enforcement capacities, and implementation timelines that are still being negotiated. By the time the enforcement machinery reaches operational maturity, the innovation window will have narrowed.
The Principles-Based Alternative Europe Is Ignoring
Contrast the EU's approach with that of Switzerland, which sits outside the AI Act's direct jurisdiction and has opted for a principles-based framework built on existing sectoral regulation. The Swiss Financial Market Supervisory Authority (FINMA) applies established financial services law to AI use cases rather than constructing a parallel, AI-specific compliance architecture. Swiss fintech firms are not filing conformity assessments before shipping credit risk tools. They are shipping, gathering feedback, and iterating. That is how software works.
The United Kingdom took a similar position. The previous government's pro-innovation AI regulation strategy, carried forward by the current administration, places responsibility on existing sectoral regulators, including the Financial Conduct Authority, rather than creating a new horizontal AI regulator. The FCA's AI Lab and its ongoing engagement with firms through the regulatory sandbox allow financial services companies to test AI applications under regulatory supervision without triggering the full compliance overhead of the EU model. Kirsty Nathwani, who leads the FCA's data and technology function, has been explicit that the authority wants to enable responsible innovation, not litigate it into paralysis before it starts.
Neither the Swiss nor the UK approach is permissive. Both maintain binding obligations for genuinely high-risk applications. The difference is that neither approach assumes every AI system in financial services is high-risk by default, or that compliance documentation is a substitute for actual harm prevention.
Financial Services: Where the Trade-Offs Are Sharpest
Financial services is the sector where the EU AI Act's bite is most immediate. Credit scoring systems for natural persons must meet transparency and explainability requirements, pass conformity assessment before deployment, and be registered in the EU database for high-risk AI systems. Risk assessment and pricing tools in life and health insurance carry the same high-risk obligations. (Fraud detection, notably, is explicitly carved out of Annex III's high-risk list.) These requirements are not unreasonable in isolation. Their cumulative effect on a fintech building across multiple EU jurisdictions is substantial.
The specific challenge for generative AI in financial services is that the Act's general-purpose AI model rules apply in tiers: all general-purpose models carry documentation and transparency duties, and models trained above a 10^25 FLOP compute threshold are presumed to pose systemic risk, with obligations that are still being interpreted. A European bank deploying a large language model for client-facing document summarisation or internal compliance review faces genuine legal uncertainty about where its obligations begin and end. That uncertainty has a cost: delayed deployment, conservative use cases, and a preference for buying from large US providers, whose legal teams have already absorbed the compliance costs, rather than building on European open-source models.
This is the paradox. The AI Act was partly designed to prevent European dependence on non-European AI providers. Its compliance architecture is accelerating exactly that dependence, because only the largest non-European providers can afford to pre-clear their systems for EU deployment at scale.
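To make the 10^25 figure concrete: the threshold is a cumulative training-compute number, and the Act itself does not prescribe an estimation method. Below is a minimal back-of-envelope sketch assuming the common 6·N·D heuristic (roughly six FLOPs per parameter per training token); the model sizes are illustrative assumptions, not published figures.

```python
# Back-of-envelope check against the AI Act's 10^25 FLOP presumption
# for general-purpose AI models with systemic risk (Article 51).
# The 6*N*D rule is a community heuristic, not a method the Act
# prescribes; the model figures below are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical models a bank might shortlist for document summarisation.
candidates = {
    "mid-size open model (70B params, 15T tokens)": training_flops(70e9, 15e12),
    "frontier-scale model (1.8T params, 13T tokens)": training_flops(1.8e12, 13e12),
}

for name, flops in candidates.items():
    side = "above" if flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS else "below"
    print(f"{name}: ~{flops:.1e} FLOPs, {side} the 10^25 threshold")
```

The point is not precision; it is that a deploying bank has to reason about someone else's training run to know which tier of obligations travels with the model it buys.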
What Success Actually Looks Like
The metrics that will determine whether Europe's regulatory approach was correct are not white papers or enforcement actions. They are startup funding velocity in AI-enabled financial services, time-to-market for AI products, talent retention in technical roles, and whether founders with optionality choose to build in the EU or register elsewhere and comply later.
The UK's AI sector is already showing signs of benefiting from regulatory differentiation. London remains Europe's largest fintech hub, and the FCA's sandbox model has been replicated in over 50 jurisdictions globally because it demonstrably works. Switzerland's financial AI ecosystem is growing faster than its EU neighbours in several verticals, particularly in wealth management automation and regulatory technology. These are not accidents. They are the outputs of frameworks that prioritise real-world feedback over theoretical compliance architectures.
The EU's counter-argument is that early binding rules prevent lock-in, establish consumer trust, and create a level playing field for European companies in global markets. That argument has merit for some contexts. It has less merit when the compliance costs fall asymmetrically on the smallest and fastest-growing companies, and when the rules are being interpreted inconsistently across 27 enforcement jurisdictions. A level playing field that only the largest players can afford to stand on is not level.
Three Questions the EU Needs to Answer
First: if compliance costs for high-risk AI systems in financial services genuinely consume 40 per cent of early-stage budgets, what evidence does the Commission have that this cost produces proportionate consumer protection outcomes, rather than simply shifting market share to incumbents?
Second: the UK and Switzerland are both operating lighter-touch frameworks for financial AI without the consumer harm events that prescriptive rules were designed to prevent. What is the Commission's account of why those jurisdictions are not experiencing those harms?
Third: the general-purpose AI model rules introduce systemic risk obligations for large models. European financial institutions are already choosing US-hosted models partly to avoid compliance uncertainty. How does the Commission reconcile this outcome with the Act's stated goal of strengthening European AI sovereignty?
These are not rhetorical questions. They are the questions that the European AI Office, the Commission's DG CONNECT, and the financial services committees of the European Parliament should be asking publicly, with published answers. The AI Act's obligations are not yet fully applicable. There is still time to calibrate implementation in ways that reduce the compliance asymmetry without abandoning the framework's legitimate consumer protection goals.
But the window is narrowing. The jurisdictions that built for emergence rather than control are already compounding their advantage. Europe should measure that, honestly, before it is too late to adjust.