With fewer than fourteen weeks until the European AI Office begins enforcing the AI Act's general-purpose model provisions on 2 August 2026, the General-Purpose AI Code of Practice has stopped being a discussion document and started becoming an operational checklist. This explainer sets out what the Code actually requires, where it sits inside the AI Act, and which provider obligations European compliance teams should treat as effectively binding from now on.
What the Code is, and what it is not
The Code of Practice was developed under Article 56 of the AI Act and finalised in mid-2025. The European Commission and the AI Board confirmed its adequacy in July 2025, after which signatory providers were given until 2 August 2026 to align. The Commission's own page describes the Code as a voluntary tool for demonstrating compliance with Articles 53 and 55 of the AI Act, and the AI Office has been clear that signing the Code is the path of least resistance for providers.
It is not law. Adherence is voluntary. Non-signatory providers can still meet AI Act obligations through their own documented processes. In practice, every meaningful general-purpose model provider with European exposure has signalled an intent to sign or align, because the alternative is a bilateral compliance dialogue with the AI Office, conducted in writing, against a clock.
The three chapters, in plain language
The Code has three chapters. The first two apply to all providers of general-purpose AI models. The third applies only to providers of models judged to carry systemic risk under the Article 51 thresholds.
Chapter 1, Transparency. Providers must publish a model documentation pack covering training data sources, evaluation methodology, intended uses, and known limitations. The Code references the AI Act's Annex XII as the minimum disclosure schema. The practical reading is that the Annex XII fields are the mandatory floor; the Code adds expectations on presentation format and update cadence.
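To make the documentation pack concrete, here is a minimal sketch of what an Annex XII-style record could look like as a structured object. The field names and the ModelDocumentation class are illustrative assumptions, not the AI Office's template, which is still being finalised.

```python
from dataclasses import dataclass


@dataclass
class ModelDocumentation:
    """Illustrative Annex XII-style documentation record.

    Field names are hypothetical; the AI Office's finalised
    transparency template will define the authoritative schema.
    """
    model_name: str
    provider: str
    intended_uses: list[str]
    known_limitations: list[str]
    training_data_sources: list[str]  # high-level source categories
    evaluation_methodology: str       # benchmarks and protocols used
    last_updated: str                 # ISO 8601 date of the last revision


doc = ModelDocumentation(
    model_name="example-model-7b",
    provider="Example Labs",
    intended_uses=["text generation", "summarisation"],
    known_limitations=["degraded accuracy on long contexts"],
    training_data_sources=["licensed corpora", "filtered public web crawl"],
    evaluation_methodology="public benchmarks plus internal red-team suites",
    last_updated="2026-04-20",
)
```

The point of structuring the record is the update cadence: a versioned object with a last_updated field is auditable in a way a static PDF is not.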
Chapter 2, Copyright. Providers must publish a sufficiently detailed summary of training content and operate a policy that respects EU copyright law, with specific attention to the opt-out mechanism in Article 4 of the Copyright in the Digital Single Market Directive. The Code commits signatories to honouring machine-readable opt-outs (including robots.txt directives and standardised metadata signals) and to maintaining a rightsholder contact mechanism. This is the most operationally demanding chapter for non-EU providers, because compliance turns on systems they may have to build, not just describe.
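To show what honouring a machine-readable opt-out can mean in practice, here is a minimal pre-crawl check using only Python's standard library. The crawler name is a made-up placeholder, and real pipelines would also need to handle standardised TDM reservation metadata, which this sketch does not cover.

```python
from urllib.parse import urlparse
from urllib.robotparser import RobotFileParser

# Hypothetical user-agent string for a training-data crawler.
CRAWLER_UA = "ExampleTrainingBot"


def may_fetch_for_training(url: str) -> bool:
    """Return True only if robots.txt permits CRAWLER_UA to fetch url.

    Deliberately conservative: if robots.txt cannot be retrieved at
    all, the function declines rather than assuming permission.
    """
    parts = urlparse(url)
    robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"
    parser = RobotFileParser()
    parser.set_url(robots_url)
    try:
        parser.read()  # fetches and parses the site's robots.txt
    except OSError:
        return False   # network failure: err on the side of opt-out
    return parser.can_fetch(CRAWLER_UA, url)


if may_fetch_for_training("https://example.com/articles/page.html"):
    print("robots.txt permits crawling this URL for training data")
```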
Chapter 3, Safety and Security. Applies to GPAI models with systemic risk, presumed under Article 51 where cumulative training compute exceeds 10^25 floating-point operations. Signatories commit to a pre-deployment risk assessment, ongoing model evaluation against agreed risk taxonomies, an incident reporting line to the AI Office, and a documented mitigation framework. The Code references published technical standards, with the EU's standardisation request to CEN-CENELEC's joint technical committee JTC 21 producing the harmonised standards that will eventually anchor compliance.
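The 10^25 figure is easier to reason about with the common back-of-envelope estimate that training compute is roughly 6 × parameters × training tokens. The sketch below applies that heuristic to two hypothetical models; it is a rough screening estimate, not the AI Office's measurement methodology.

```python
# Heuristic estimate: training FLOPs ≈ 6 * parameters * tokens.
# A common approximation, not the official measurement methodology.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51 presumption threshold


def estimated_training_flops(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens


# Hypothetical 7B-parameter model trained on 2 trillion tokens.
small = estimated_training_flops(7e9, 2e12)     # 8.4e22: far below
# Hypothetical 500B-parameter model trained on 15 trillion tokens.
large = estimated_training_flops(500e9, 15e12)  # 4.5e25: above

for label, flops in [("7B / 2T tokens", small), ("500B / 15T tokens", large)]:
    presumed = flops > SYSTEMIC_RISK_THRESHOLD_FLOPS
    print(f"{label}: {flops:.1e} FLOPs -> systemic-risk presumption: {presumed}")
```

On this estimate, a 7-billion-parameter model of the kind discussed later in this piece sits roughly two orders of magnitude below the threshold, which is why Chapter 3 never bites for it.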
The August 2026 enforcement clock
The AI Act's GPAI provisions have been in force since 2 August 2025; from 2 August 2026, the European AI Office can act on them. The Commission has been clear that enforcement begins with information requests and structured dialogues, escalating to model recalls and fines under Article 101 (up to 3 per cent of worldwide annual turnover or €15 million, whichever is higher) only where compliance fails. For European compliance teams, the practical implication is that a signed Code commitment, backed by documented evidence of alignment, is the cheapest available enforcement insurance.
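The "whichever is higher" clause in Article 101 is worth a worked example. A one-function sketch, with illustrative turnover figures:

```python
# Article 101 ceiling: the higher of 3% of worldwide annual turnover
# or EUR 15 million. Turnover inputs below are illustrative.
def fine_ceiling_eur(worldwide_turnover_eur: float) -> float:
    return max(0.03 * worldwide_turnover_eur, 15_000_000)


print(fine_ceiling_eur(2_000_000_000))  # 60,000,000.0 -> the 3% arm dominates
print(fine_ceiling_eur(100_000_000))    # 15,000,000   -> the floor dominates
```

For any provider with worldwide turnover above €500 million, the 3 per cent arm governs.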
What still has to land before August
Three workstreams remain genuinely incomplete. The first is the Transparency template, expected from the AI Office in finalised form in May. The second is the harmonised standards bundle from CEN-CENELEC JTC 21, which is on a slower clock and will not be fully published before the enforcement date. The third is the AI Office's signatory taskforce deliverables, which include Q&A on edge cases such as fine-tuned variants of base models and where downstream provider obligations begin.
None of those gaps is a reason to wait. The Code's binding commitments are clear enough now to drive provider documentation, copyright compliance work, and risk-assessment scaffolding. Compliance teams that delay until the May template lands will find themselves competing for legal and technical capacity in June and July.
The view from European providers
Mistral, Aleph Alpha (now under the Cohere umbrella), and the larger US labs have all signalled intent to sign. Several smaller European labs have argued the Code is disproportionately costly for sub-systemic-risk models. The AI Office's reply, repeated in its public meetings, is that Chapters 1 and 2 scale down naturally for smaller providers; Chapter 3 only bites at the systemic-risk threshold. That is the right framing, but it does not eliminate the documentation burden for a small provider running a 7-billion-parameter European-language model.
THE AI IN EUROPE VIEW
The Code is a quietly successful piece of regulatory engineering. By making compliance opt-in but operationally cheap, the Commission has narrowed enforcement risk for providers willing to engage and concentrated regulatory attention on those who refuse. European procurement teams should treat a vendor's signed Code commitment as a baseline contractual ask from May onwards, and demand evidence of alignment with all three chapters where systemic-risk models are involved. The danger is not that the Code overreaches. It is that providers treat its voluntary character as licence to delay until enforcement starts.