Seven Pillars of AI Accountability
The Convention builds upon seven core principles that governments must weave into national AI policy: human dignity, transparency, accountability, equality, privacy protection, reliability, and safe innovation. Signatories retain flexibility in domestic implementation, which allows legislators to tailor rules to local legal traditions whilst meeting the treaty's baseline requirements.
The framework specifically targets AI systems capable of affecting human rights, democratic processes, or the rule of law. Crucially, it covers both public sector deployments and private sector applications that fall within those critical categories. That dual scope is broader than many observers initially anticipated and places obligations on commercial operators as well as government agencies.
UK Justice Secretary Shabana Mahmood was unequivocal at the signing: "This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law." Her framing signals that the UK government views this as a complement to, rather than a replacement for, its existing domestic AI safety work.
Implementation Challenges and Criticism
Legal experts are already raising hard questions about enforceability, and the criticism is not trivial. Francesca Fanucci, Legal Expert at the European Centre for Not-for-Profit Law, has been direct: "The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability."
Fanucci's concern centres on the treaty's broad language and the volume of built-in exemptions. National security carve-outs are particularly problematic. The convention permits countries to exclude AI systems deployed for defence or security purposes, a loophole large enough to swallow significant portions of state AI activity in practice.
There is also a structural asymmetry in oversight. The framework subjects government deployments to stronger scrutiny requirements than it applies to commercial operators. Critics argue this creates a two-tier system in which the private sector, responsible for developing and deploying the majority of AI systems touching citizens' lives, faces a lighter compliance burden than public bodies.
Enforcement itself relies on diplomatic pressure, international cooperation mechanisms, and reputational costs rather than direct sanctions. Compliance monitoring occurs through regular reporting requirements and peer review among signatory nations. Whether that proves sufficient when commercial or geopolitical interests conflict with treaty obligations remains an open and uncomfortable question.
Where This Sits in the Broader European Regulatory Picture
The Council of Europe Convention arrives alongside the EU AI Act, which entered into force in August 2024. Together, they represent the most ambitious attempt at structured AI governance anywhere in the world. The table below shows how the major frameworks align:
| Framework | Nature and focus | Date |
|---|---|---|
| Council of Europe Convention | Binding treaty with a human rights focus | Signed September 2024 |
| EU AI Act | Risk-based regulation with a market and safety focus | Effective August 2024 |
| US Executive Orders | Federal agency coordination framework | October 2023 |
| UK AI Safety Summit | International cooperation framework | Established November 2023 |
The interaction between the Convention and the EU AI Act will be a defining compliance challenge for technology companies operating across European jurisdictions. Organisations must now map their AI systems against the risk classifications of the AI Act, the human rights obligations of the Convention, applicable national legislation, and sector-specific rules. That is a formidable stack, and firms that have treated AI governance as a box-ticking exercise will find themselves badly exposed.
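In practice, that mapping exercise often starts as an internal governance register: each AI system is recorded once, then checked against each applicable framework. The sketch below is a minimal, hypothetical illustration of such a register in Python; the risk-tier labels, obligation strings, and the `AISystemRecord` structure are assumptions for illustration, not official classifications from the AI Act or the Convention.

```python
from dataclasses import dataclass, field

# Hypothetical risk tiers loosely mirroring the EU AI Act's categories.
# These labels and the obligation mapping below are illustrative only.
RISK_TIERS = ("unacceptable", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One illustrative entry in an internal AI governance register."""
    name: str
    purpose: str
    risk_tier: str                      # AI Act-style tier (assumed labels)
    affects_fundamental_rights: bool    # triggers Convention-style duties
    jurisdictions: list = field(default_factory=list)

    def obligations(self) -> list:
        """Collect the (hypothetical) obligation sets this system maps onto."""
        duties = []
        if self.risk_tier == "high":
            duties.append("EU AI Act: conformity assessment and record-keeping")
        if self.affects_fundamental_rights:
            duties.append("Convention: transparency and appeal mechanism")
        for jurisdiction in self.jurisdictions:
            duties.append(f"National law review: {jurisdiction}")
        return duties

# Example entry: a welfare eligibility system of the kind the article mentions.
register = [
    AISystemRecord(
        name="benefit-eligibility-scorer",
        purpose="welfare benefit eligibility triage",
        risk_tier="high",
        affects_fundamental_rights=True,
        jurisdictions=["UK"],
    ),
]

for record in register:
    print(record.name, "->", record.obligations())
```

Even a toy register like this makes the compliance stack visible per system, which is the first step firms that treated governance as box-ticking tend to have skipped.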
Valentina Pavel, AI Policy Researcher at AlgorithmWatch in Berlin, has argued that the Convention's human rights framing fills a gap the EU AI Act does not fully address. Where the Act is primarily a market regulation instrument focused on product safety and risk categories, the Convention anchors obligations directly to fundamental rights. The two instruments are, in principle, complementary; in practice, their interaction will take years of case law to clarify.
Key Implementation Priorities
For public sector bodies and technology companies working through what compliance actually requires, five implementation areas will demand immediate attention:
- Risk assessment methodologies for AI systems affecting human rights
- Transparency requirements for algorithmic decision-making processes
- Appeals mechanisms for individuals affected by AI system decisions
- Cross-border cooperation frameworks for investigation and enforcement
- Regular review and updating processes to address technological evolution
What Comes Next
The treaty is open to accession by any country, not solely Council of Europe members. That design choice reflects an ambition to establish a genuinely global baseline rather than a European club standard. How many non-Western nations ultimately accede will determine whether the Convention becomes a true international framework or a transatlantic agreement with honorary membership.
The real test arrives during the implementation phase. Divergent national interpretations of the Convention's broad principles could undermine its effectiveness and create regulatory arbitrage opportunities for companies willing to route operations through whichever jurisdiction interprets obligations most leniently. Consistent, rigorous implementation across signatory states is not guaranteed, and the absence of hard sanctions makes coordination harder to sustain over time.
From autonomous vehicles to predictive healthcare and welfare benefit eligibility systems, AI already influences consequential decisions affecting millions of people across Europe every day. The Council of Europe Convention represents a serious attempt to govern that reality through binding international law. Whether it succeeds depends entirely on the political will of signatories to translate principles into legislation that genuinely protects citizens rather than merely satisfying a diplomatic checklist.