Deploying Generative AI in a European Firm: The Compliance Playbook Your Board Actually Needs
From scoping your system under the AI Act's risk tiers to resolving the inevitable collision between GDPR and new EU AI rules, this step-by-step guide gives European enterprises the practical framework to deploy generative AI without landing in a regulator's crosshairs.
Getting generative AI into production inside a European enterprise is not primarily a technology problem; it is a governance problem, and most firms are tackling it in the wrong order.
The instinct is to pilot first and ask compliance questions later. That approach worked reasonably well in a GDPR-only world, where legal teams could retrofit a lawful-basis argument and publish a privacy notice. It will not work under the EU AI Act, which imposes obligations that must be designed in from the moment a system is scoped, not bolted on after a vendor demo. The Act entered into force on 1 August 2024, with prohibitions applying from 2 February 2025 and obligations for high-risk systems phasing in through 2026. Enterprises that are still treating AI governance as a post-deployment audit exercise are already behind.
This guide walks through five sequential steps: risk classification, data-protection assessment, vendor due diligence, board reporting cadence, and resolving the GDPR-AI Act collision. Each step is concrete and actionable.
"Deployers of high-risk AI systems cannot outsource their compliance obligations to their vendors. If the documentation does not exist, the deployment cannot proceed lawfully."
Bird & Bird AI Act commentary, 2024
Step 1: Classify Your System Before You Write a Single Line of Procurement
The AI Act's four-tier risk architecture sounds straightforward until you try to apply it to a real deployment. Most generative AI use cases inside enterprises sit in one of two categories: high-risk (if the system affects employment decisions, credit, education, or essential services) or limited-risk (if it merely generates text, images, or code for internal productivity).
The classification is not self-evident. A large language model used to draft HR performance summaries could qualify as a high-risk system under Annex III, point 4, which covers AI used for recruitment, promotion, task allocation, and monitoring of employees. French data-protection authority CNIL published an AI compliance guide in 2023 that explicitly flags HR-facing generative AI as an area requiring heightened scrutiny, citing the risk of automated inference about workers' characteristics. Firms should pull that document and run every proposed use case through its decision tree before touching a vendor contract.
For general-purpose AI models (GPAI), the Act introduces a separate regime. If your firm is deploying a GPAI model with a systemic-risk designation, your vendor carries provider obligations, but you carry deployment obligations too. Bird & Bird's AI Act tracker, updated quarterly, maps which GPAI providers have self-classified under the systemic-risk threshold and which have not. That tracker should sit on every procurement team's desktop.
Document your classification decision in writing, with named signatories and a version date. Regulators will ask for it.
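A minimal sketch of what that written record might look like, assuming a plain Python dataclass checked into the same repository as the procurement file. The schema, the RiskTier names, and the example values are all illustrative assumptions, not a regulatory template.

```python
# Hypothetical classification record -- a minimal sketch, not an official template.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # Article 5 prohibited practices
    HIGH = "high"                  # Annex III use cases
    LIMITED = "limited"            # transparency obligations only
    MINIMAL = "minimal"


@dataclass(frozen=True)
class ClassificationDecision:
    system_name: str
    use_case: str
    tier: RiskTier
    annex_reference: str | None    # e.g. "Annex III, point 4" for HR-facing systems
    rationale: str
    signatories: tuple[str, ...]   # named individuals with roles, not just titles
    version_date: date


decision = ClassificationDecision(
    system_name="hr-summary-assistant",
    use_case="Drafting performance summaries from manager notes",
    tier=RiskTier.HIGH,
    annex_reference="Annex III, point 4",
    rationale="Outputs feed promotion decisions and employee monitoring.",
    signatories=("A. Dupont (General Counsel)", "B. Keller (CRO)"),
    version_date=date(2025, 3, 1),
)
```

The frozen dataclass is deliberate: a classification decision should be superseded by a new versioned record, not edited in place.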
Step 2: Run a Data-Protection Impact Assessment, and Do It Properly This Time
If your use case is high-risk under the AI Act, you almost certainly need a Data Protection Impact Assessment under Article 35 of the GDPR. The European Data Protection Board's joint opinion on AI Act supervisory authorities, published in June 2024, makes explicit that the two regimes must be read together and that DPIA obligations are not superseded or replaced by AI Act conformity assessments; they stack.
A proper DPIA for a generative AI deployment must go beyond boilerplate. It needs to address: the training data provenance of the underlying model (was personal data used, and was there a lawful basis for that use); the inference risks (can the model reconstruct personal data from prompts or outputs); the retention logic (do prompts sent to a cloud API get stored, and for how long); and the automated-decision risk (is any output feeding into a decision that produces legal or similarly significant effects on individuals).
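One way to keep that assessment from collapsing back into boilerplate is to treat the four questions as mandatory fields that block sign-off while any of them is unanswered. The sketch below assumes that structure; the keys and the completeness check are illustrative, not an official DPIA format.

```python
# The four DPIA questions above as mandatory fields -- an assumed structure,
# not an official template.
DPIA_QUESTIONS = {
    "training_data_provenance": "Was personal data used to train the model, and on what lawful basis?",
    "inference_risk": "Can the model reconstruct personal data from prompts or outputs?",
    "retention_logic": "Are prompts sent to the cloud API stored, where, and for how long?",
    "automated_decision_risk": "Does any output feed a decision with legal or similarly significant effects?",
}


def check_dpia_complete(answers: dict[str, str]) -> None:
    """Raise if any mandatory section is missing or left blank."""
    missing = [key for key in DPIA_QUESTIONS if not answers.get(key, "").strip()]
    if missing:
        raise ValueError(f"DPIA incomplete; unanswered sections: {missing}")
```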
CNIL has been particularly active here. Its 2024 guidance on generative AI and personal data works through the six GDPR lawful bases and explains why legitimate interests is harder to rely on than most firms assume when the processing involves novel, large-scale inference. Any European firm using a third-party API should read CNIL's guidance even if it is not French-headquartered; the analysis is the most detailed published by any EU supervisory authority to date.
Step 3: Vendor Due Diligence Is Not a Checkbox, It Is a Contractual Architecture
Most enterprise AI deployments involve at least three parties: the foundation model provider, a middleware or fine-tuning layer, and the enterprise itself. Each layer creates obligations, and the contracts between them must reflect that.
Under the AI Act, deployers of high-risk systems must ensure their providers supply technical documentation, logs, and instructions for use sufficient to discharge the deployer's own obligations. If your vendor cannot provide that documentation, you cannot legally deploy. This is not a theoretical risk: several European enterprises have paused deployments after discovering that their chosen API provider's terms explicitly disclaim the obligations Article 13 places on providers of high-risk systems.
Due diligence must cover at minimum: the provider's Article 13 technical documentation; their data-processing agreement and sub-processor list; their breach-notification SLAs against GDPR's 72-hour window; their model-card disclosures on training data; and their approach to output logging for post-market monitoring purposes. Bird & Bird's team has noted in published commentary that standard cloud-vendor DPAs were not drafted with the AI Act's Article 14 human-oversight requirements in mind and need bespoke addenda.
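Read literally, the deployer obligations above translate into a gate: no documents, no deployment. A sketch of that gate follows, with item identifiers that are assumptions rather than statutory labels.

```python
# A deployment gate over the five minimum due-diligence items listed above.
# Item identifiers are illustrative, not statutory labels.
REQUIRED_VENDOR_DOCS = frozenset({
    "article_13_technical_documentation",
    "dpa_with_subprocessor_list",
    "breach_notification_sla_72h",
    "model_card_training_data_disclosure",
    "output_logging_for_post_market_monitoring",
})


def may_deploy(received: set[str]) -> bool:
    """Return True only if every required vendor document is on file."""
    missing = REQUIRED_VENDOR_DOCS - received
    if missing:
        print(f"Deployment blocked; missing: {sorted(missing)}")
        return False
    return True
```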
Insist on contractual audit rights. A right to audit is only valuable if it is specific: name the systems in scope, the frequency, the standard being audited against (ISO/IEC 42001 is the emerging benchmark), and the consequence of failure.
Step 4: Set a Board Reporting Cadence Before Deployment, Not After an Incident
European corporate governance frameworks, including the UK Corporate Governance Code and the forthcoming EU Corporate Sustainability Reporting Directive's digital-risk disclosures, are converging on the expectation that boards own material technology risks. Generative AI is now material for most large enterprises.
A sensible board reporting cadence for AI governance has three layers. Monthly operational reporting should go to the CTO or Chief Risk Officer: model-performance metrics, prompt-injection incidents, data-quality flags, and any near-miss events where the system produced outputs that required human override. Quarterly board reporting should cover the risk register, compliance posture against the AI Act timeline, and any open regulatory correspondence. Annual strategic review should address the firm's AI risk appetite, the evolving regulatory landscape, and whether the portfolio of AI systems still sits within agreed risk parameters.
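Expressed as a structure a governance or GRC tool could consume, the cadence looks like the sketch below. Audiences and item names come from the description above; the format itself is an assumption.

```python
# The three-layer reporting cadence as data -- format and keys are illustrative.
REPORTING_CADENCE = {
    "monthly": {
        "audience": "CTO / Chief Risk Officer",
        "items": [
            "model-performance metrics",
            "prompt-injection incidents",
            "data-quality flags",
            "near-misses requiring human override",
        ],
    },
    "quarterly": {
        "audience": "Board",
        "items": [
            "AI risk register",
            "compliance posture against the AI Act timeline",
            "open regulatory correspondence",
        ],
    },
    "annual": {
        "audience": "Board (strategic review)",
        "items": [
            "AI risk appetite",
            "regulatory landscape changes",
            "portfolio fit within agreed risk parameters",
        ],
    },
}
```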
The board needs a named AI governance owner, not a committee. Committees diffuse accountability. Name a Chief AI Officer or assign the role explicitly to the CRO or General Counsel, with a formal mandate and board-level access.
Step 5: When GDPR and the AI Act Collide, the GDPR Usually Wins, But Not Always
The most practically difficult compliance question in European AI deployment right now is what happens when the two regimes pull in opposite directions. The most common collision point is data retention.
The AI Act's post-market monitoring obligations for high-risk systems require deployers to keep logs sufficient to enable incident investigation and regulatory audit. GDPR's storage-limitation principle under Article 5(1)(e) requires that personal data not be kept longer than necessary for the original purpose. If your logs contain personal data, as they almost always will in any customer-facing or employee-facing deployment, you have a tension that cannot be resolved by citing one rule over the other.
The EDPB's June 2024 joint opinion addresses this directly, noting that the two regimes are complementary and that pseudonymisation of logs is the preferred technical solution. Firms should architect logging systems to strip or hash personal identifiers at ingestion, retaining only the metadata needed for model monitoring. That architecture needs to be documented in the DPIA and in the technical documentation submitted for AI Act conformity purposes.
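A minimal sketch of hash-at-ingestion follows, assuming a keyed HMAC so that the pseudonym stays stable across records (which incident correlation needs) but cannot be reversed without a key held outside the log store. Field names, the environment variable, and the digest truncation are all assumptions.

```python
# Strip-or-hash personal identifiers before a log record is written.
# Keyed hashing (HMAC) keeps pseudonyms stable across records without
# making them reversible; the key must live outside the log store.
import hashlib
import hmac
import os

PSEUDONYM_KEY = os.environ["LOG_PSEUDONYM_KEY"].encode()  # hypothetical secret
PERSONAL_FIELDS = {"user_id", "email", "employee_number"}


def pseudonymise(record: dict) -> dict:
    """Return a copy of the record with personal identifiers replaced by keyed hashes."""
    clean = {}
    for key, value in record.items():
        if key in PERSONAL_FIELDS:
            digest = hmac.new(PSEUDONYM_KEY, str(value).encode(), hashlib.sha256)
            clean[key] = digest.hexdigest()[:16]  # truncated pseudonym
        else:
            clean[key] = value  # model id, latency, token counts, timestamps
    return clean


entry = pseudonymise({"user_id": "alice.example", "model": "genai-v2", "latency_ms": 412})
```

Truncating the digest trades collision resistance for readability; whether that trade is acceptable depends on log volume, and the choice belongs in the DPIA.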
A second collision arises around transparency. GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. The AI Act's limited-risk transparency obligations require disclosure when users are interacting with an AI system. These are not the same disclosure, and they are not triggered by the same threshold. Firms need two separate disclosure frameworks, one triggered by automated-decision logic and one triggered by AI-interaction logic, and they need to run in parallel, as the sketch below illustrates.
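A sketch of keeping the two triggers independent, with booleans standing in for the legal analysis each question actually requires; the function and its inputs are illustrative assumptions.

```python
# Two disclosure frameworks, two independent triggers -- illustrative only.
def required_disclosures(
    solely_automated: bool,        # GDPR Art. 22: decision based solely on automated processing
    significant_effect: bool,      # ...producing legal or similarly significant effects
    user_interacts_with_ai: bool,  # AI Act limited-risk transparency trigger
) -> list[str]:
    disclosures = []
    if solely_automated and significant_effect:
        disclosures.append("GDPR Art. 22 notice, with a route to human intervention")
    if user_interacts_with_ai:
        disclosures.append("AI Act notice: the user is interacting with an AI system")
    return disclosures


# An internal drafting chatbot triggers only the AI Act notice;
# an automated credit decision delivered via chat triggers both.
print(required_disclosures(False, False, True))
print(required_disclosures(True, True, True))
```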
By The Numbers
The scale of the compliance challenge becomes clear when you look at adoption rates alongside enforcement preparedness. European enterprises are deploying generative AI at pace while simultaneously acknowledging that their governance infrastructure has not kept up. The numbers below frame the gap between deployment ambition and compliance readiness across the continent.
What to Do This Week
If your firm has any generative AI deployment in flight or in planning, three actions are non-negotiable in the short term. First, run every current deployment through the AI Act's risk classification process and document the outcome. Second, pull your existing DPIAs for any AI-adjacent processing and update them against the EDPB's June 2024 joint opinion. Third, send your top three AI vendors a formal request for their Article 13 technical documentation and their current approach to GPAI systemic-risk classification.
The AI Act's enforcement architecture places primary liability on deployers for most obligations. Ignorance of what your vendor is or is not doing does not shift that liability. Start the documentation trail now, because when a supervisory authority comes knocking, they will ask for records, not intentions.
THE AI IN EUROPE VIEW
The compliance conversation around European AI deployment has been dominated by abstraction for too long. Frameworks, principles, guidelines, opinions: the paper trail is extensive and the practical guidance has been thin. That is beginning to change. CNIL's generative-AI guidance is genuinely useful. The EDPB's joint opinion is more substantive than its predecessors. Bird & Bird's AI Act tracker gives practitioners a working tool rather than a think-piece.

But the fundamental problem remains: European enterprises are being asked to comply with an AI Act that is still phasing in, on top of a GDPR that was never designed with large language models in mind, overseen by supervisory authorities that are under-resourced and still working out their own jurisdictional boundaries under the new regime. The firms that will navigate this best are not the ones with the largest compliance teams; they are the ones that have made governance a design constraint from the first day of any AI project, not a remediation exercise after the first incident. That requires a cultural shift at board level, not just a policy update.

European regulators have broadly got the architecture right. What they have not yet demonstrated is the enforcement consistency that would make the compliance investment feel essential rather than optional. Until that changes, expect the gap between deployment ambition and governance reality to widen further before it narrows.
AI Terms in This Article
foundation model
A large AI model trained on broad data, then adapted for specific tasks.
fine-tuning
Training a pre-built AI model further on specific data to improve its performance on particular tasks.
inference
When an AI model processes input and produces output. The actual 'thinking' step.
parameters
The internal settings an AI model learns during training. More parameters generally means a more capable model.
generative AI
AI that creates new content (text, images, music, code) rather than just analyzing existing data.
API
Application Programming Interface, a way for software to talk to other software.