The European Baseline: Strong on Paper, Patchy in Practice
The EU AI Act, in force since August 2024, and the UK Financial Conduct Authority's sector-led, principles-based approach represent two distinct philosophies. The AI Act imposes binding obligations on high-risk AI systems, including those used in credit scoring, insurance underwriting, and fraud detection. The FCA, meanwhile, has pursued a more iterative model, using its AI Lab and regulatory sandbox to test applications before codifying rules.
Andrea Resti, professor of banking and finance at Bocconi University and a member of the European Banking Authority's banking stakeholder group, has argued consistently that AI risk in financial services cannot be managed through self-regulation alone. His analysis of model risk in AI-assisted credit decisions points to the same structural problem that Hong Kong is attempting to solve: without mandatory governance frameworks, AI pilots proliferate but production deployments stall because compliance teams cannot sign them off.
That friction is measurable. Internal estimates at several tier-one European banks suggest that fewer than half of AI pilots in financial services ever reach full production, a figure that mirrors the Asian statistic and points to governance deficits rather than technical ones.
Cross-Border Data Flows: Europe's Unfinished Business
One of Hong Kong's clearest advantages is its framework for enabling cross-border data flows whilst maintaining privacy protections. For European firms, the equivalent challenge is navigating GDPR adequacy decisions, Schrems II compliance, and the EU-US Data Privacy Framework simultaneously. The result is a patchwork that imposes real costs on AI model training and data sharing between subsidiaries.
Dragomir Stantchev, a board member and AI governance adviser who has worked directly with the European Commission on AI policy, has pointed out that the EU's fragmented implementation of GDPR across member states creates regulatory arbitrage that undermines the single market's stated ambition. Firms building cross-border AI applications in supply chain optimisation or financial risk modelling face materially different compliance burdens depending on which member state their data controller is registered in.
The UK's post-Brexit data regime adds another layer. The Data (Use and Access) Bill, currently progressing through Parliament, aims to streamline data sharing for AI development whilst maintaining adequacy with the EU. Whether it succeeds will determine whether London retains its position as a viable single base for firms seeking to operate compliantly across European markets.
Financial Services: The Sector Where Governance Pays Off Fastest
Financial services is the sector where robust AI governance frameworks deliver the clearest return on investment. Established prudential and conduct regulation means that banks, asset managers, and insurers already operate within audit trails, model validation requirements, and senior manager accountability regimes. AI governance layers onto existing infrastructure rather than requiring it to be built from scratch.
The European Central Bank's supervisory expectations on AI, published in its guide to internal models and referenced in successive SREP cycles, make explicit that institutions must be able to explain, validate, and audit AI-driven decisions. Firms that have invested in governance infrastructure are not merely compliant; they can move faster because their models clear internal and regulatory review more quickly.
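In practice, "explain, validate, and audit" starts with logging every AI-assisted decision in a form a reviewer can reconstruct. A minimal sketch of what such a record could look like, in Python; the field names and the `record_decision` helper are illustrative assumptions, not any regulator's or vendor's schema:

```python
import hashlib
import json
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    """One auditable entry for an AI-assisted decision."""
    model_id: str
    model_version: str
    input_hash: str                 # hash of the feature vector, not raw PII
    decision: str
    top_features: list              # (feature, contribution) pairs from the explainer
    human_reviewer: Optional[str]   # populated when human oversight is triggered
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def record_decision(model_id, model_version, features, decision,
                    top_features, human_reviewer=None):
    # Canonical JSON (sorted keys) so the same inputs always hash identically,
    # regardless of the order the caller supplied them in.
    payload = json.dumps(features, sort_keys=True).encode()
    return DecisionRecord(
        model_id=model_id,
        model_version=model_version,
        input_hash=hashlib.sha256(payload).hexdigest(),
        decision=decision,
        top_features=top_features,
        human_reviewer=human_reviewer,
    )
```

Hashing the feature vector rather than storing it keeps the log auditable without it becoming a second copy of personal data; the explainer contributions and model version are what let a validator replay and challenge the decision later.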
Key European financial services sub-sectors currently benefiting from governance investment include:
- Retail and SME lending: automated credit decisioning under the AI Act's high-risk classification requires full explainability and human oversight protocols.
- Anti-money laundering and fraud detection: AI models operating in real time require continuous monitoring frameworks that only mature governance infrastructure can sustain.
- Insurance underwriting: actuarial AI models face both AI Act obligations and sector-specific EIOPA guidance on fairness and non-discrimination.
- Asset management: ESG data models and portfolio optimisation tools are under increasing scrutiny from the European Securities and Markets Authority.
- Regulatory reporting: AI-assisted reporting tools must meet data lineage and auditability standards that make governance capability a procurement prerequisite.
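The continuous-monitoring requirement for real-time AML and fraud models can be made concrete with a standard drift metric such as the Population Stability Index (PSI), which compares a model's live score distribution against its validation baseline. The sketch below is illustrative only; the quantile binning and the usual rule-of-thumb thresholds (below 0.1 stable, 0.1 to 0.25 watch, above 0.25 investigate) are industry conventions, not a supervisory prescription:

```python
import math


def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected) and a
    live (actual) sample of model scores, using quantile bins taken
    from the baseline so each bin holds roughly equal baseline mass."""
    exp_sorted = sorted(expected)
    # Interior bin edges at baseline quantiles; outer bins are open-ended.
    edges = [exp_sorted[int(len(exp_sorted) * i / bins)]
             for i in range(1, bins)]

    def bucket_counts(sample):
        counts = [0] * bins
        for x in sample:
            counts[sum(1 for e in edges if x >= e)] += 1
        return counts

    eps = 1e-6  # smoothing so an empty bucket does not blow up the log
    total_e, total_a = len(expected), len(actual)
    value = 0.0
    for ce, ca in zip(bucket_counts(expected), bucket_counts(actual)):
        pe = max(ce / total_e, eps)
        pa = max(ca / total_a, eps)
        value += (pa - pe) * math.log(pa / pe)
    return value
```

A governance function would run this on a schedule against each production model and route breaches of the upper threshold into the model-risk escalation process, which is the kind of sustained plumbing the bullet above refers to.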
What a Governance-First Europe Would Actually Look Like
The honest assessment is that Europe has the regulatory architecture but not yet the execution consistency. The AI Act's high-risk provisions are clear; the conformity assessment infrastructure to support them is still being built. The European AI Office, established in early 2024, is the institutional anchor, but it is under-resourced relative to the compliance burden it is meant to oversee.
For the EU and UK to replicate the trust advantage that Hong Kong is constructing, three things need to happen. First, GDPR implementation must be harmonised more aggressively across member states so that cross-border AI operations do not require jurisdiction-by-jurisdiction legal analysis. Second, the FCA and Prudential Regulation Authority must move from principles to concrete model risk expectations for AI systems, as the ECB has begun to do. Third, European AI investment must be tied explicitly to governance capability, not just computational capacity.
Switzerland offers one regional model worth noting. FINMA's supervisory expectations on operational resilience and model risk, combined with Switzerland's bilateral data agreements with the EU, have made Zurich a credible base for AI compliance operations serving European markets. ETH Zurich's AI Centre is producing governance research that feeds directly into regulatory thinking. That combination of academic rigour and regulatory clarity is precisely what scales trust.
The governance-first strategy is not about slowing AI down. It is about ensuring that when AI reaches production in financial services, it stays there, and that the firms which built it compliantly can defend that fact to regulators, clients, and boards alike. Europe has the frameworks. Now it needs the follow-through.