World's First Legally Binding AI Treaty Puts Human Rights at the Centre of European AI Policy

The Council of Europe Framework Convention on Artificial Intelligence, signed on 5 September 2024 by the US, UK, EU and seven other nations, establishes seven enforceable principles covering human dignity, transparency, and accountability. It marks a turning point in global AI governance, but enforcement gaps and national security exemptions are already drawing sharp criticism from legal experts.

The world's first legally binding international AI treaty is now a concrete legal reality. The Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law was signed on 5 September 2024, and it sets a global precedent that European policymakers, technology companies, and civil society groups cannot afford to treat as background noise.

The United States, United Kingdom, European Union, and seven further nations have committed to this framework. Unlike voluntary guidelines, industry codes, or non-binding recommendations, this convention carries genuine legal weight and creates enforceable obligations for signatory states. That distinction matters enormously in a landscape already crowded with principles that companies quietly shelve.


Seven Pillars of AI Accountability

The Convention builds upon seven core principles that governments must weave into national AI policy: human dignity, transparency, accountability, equality, privacy protection, reliability, and safe innovation. Signatories retain flexibility in domestic implementation, which allows legislators to tailor rules to local legal traditions whilst meeting the treaty's baseline requirements.

The framework specifically targets AI systems capable of affecting human rights, democratic processes, or the rule of law. Crucially, it covers both public sector deployments and private sector applications that fall within those critical categories. That dual scope is broader than many observers initially anticipated and places obligations on commercial operators as well as government agencies.

UK Justice Secretary Shabana Mahmood was unequivocal at the signing: "This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law." Her framing signals that the UK government views this as a complement to, rather than a replacement for, its existing domestic AI safety work.

[Image: the hemicycle chamber of the Council of Europe's Palais de l'Europe in Strasbourg, empty of delegates, with the Council of Europe flag.]

Implementation Challenges and Criticism

Legal experts are already raising hard questions about enforceability, and the criticism is not trivial. Francesca Fanucci, Legal Expert at the European Centre for Not-for-Profit Law, has been direct: "The formulation of principles and obligations in this convention is so overbroad and fraught with caveats that it raises serious questions about their legal certainty and effective enforceability."

Fanucci's concern centres on the treaty's broad language and the volume of built-in exemptions. National security carve-outs are particularly problematic. The convention permits countries to exclude AI systems deployed for defence or security purposes, a loophole large enough to swallow significant portions of state AI activity in practice.

There is also a structural asymmetry in oversight. The framework subjects government deployments to stronger scrutiny requirements than it applies to commercial operators. Critics argue this creates a two-tier system in which the private sector, responsible for developing and deploying the majority of AI systems touching citizens' lives, faces a lighter compliance burden than public bodies.

Enforcement itself relies on diplomatic pressure, international cooperation mechanisms, and reputational costs rather than direct sanctions. Compliance monitoring occurs through regular reporting requirements and peer review among signatory nations. Whether that proves sufficient when commercial or geopolitical interests conflict with treaty obligations remains an open and uncomfortable question.

Where This Sits in the Broader European Regulatory Picture

The Council of Europe Convention arrives alongside the EU AI Act, which became effective in August 2024. Together, they represent the most ambitious attempt at structured AI governance anywhere in the world. The list below shows how the major frameworks align:

  • Council of Europe Convention: Binding treaty with a human rights focus, signed September 2024
  • EU AI Act: Risk-based regulation with a market and safety focus, effective August 2024
  • US Executive Orders: Federal agency coordination framework, October 2023
  • UK AI Safety Summit: International cooperation framework established November 2023

The interaction between the Convention and the EU AI Act will be a defining compliance challenge for technology companies operating across European jurisdictions. Organisations must now map their AI systems against the risk classifications of the AI Act, the human rights obligations of the Convention, applicable national legislation, and sector-specific rules. That is a formidable stack, and firms that have treated AI governance as a box-ticking exercise will find themselves badly exposed.
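To make that stack concrete, the sketch below shows one hypothetical way an organisation might record a single AI system against the overlapping frameworks. The field names, risk classes, and example entry are illustrative assumptions for this article, not an official schema drawn from the Convention or the AI Act.

```python
# Illustrative sketch only: a hypothetical internal register entry mapping one AI
# system against the overlapping frameworks described above. Field names and
# categories are assumptions, not official schemas.
from dataclasses import dataclass, field


@dataclass
class AISystemComplianceRecord:
    """One AI system mapped against the frameworks it must satisfy."""
    system_name: str
    purpose: str
    eu_ai_act_risk_class: str          # e.g. "minimal", "limited", "high", "prohibited"
    convention_principles: list[str]   # Convention principles the system engages
    national_laws: list[str]           # applicable domestic legislation
    sector_rules: list[str]            # sector-specific obligations, if any
    open_actions: list[str] = field(default_factory=list)

    def outstanding_work(self) -> bool:
        """True if compliance actions remain unresolved."""
        return bool(self.open_actions)


# Hypothetical example entry for a welfare eligibility scoring system.
record = AISystemComplianceRecord(
    system_name="benefit-eligibility-scorer",
    purpose="Prioritise welfare benefit applications for manual review",
    eu_ai_act_risk_class="high",
    convention_principles=["human dignity", "transparency", "accountability", "equality"],
    national_laws=["UK Data Protection Act 2018"],
    sector_rules=["social security administrative guidance"],
    open_actions=["document appeals route", "publish transparency notice"],
)
print(record.outstanding_work())  # True until the open actions are closed
```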

Valentina Pavel, AI Policy Researcher at AlgorithmWatch in Berlin, has argued that the Convention's human rights framing fills a gap the EU AI Act does not fully address. Where the Act is primarily a market regulation instrument focused on product safety and risk categories, the Convention anchors obligations directly to fundamental rights. The two instruments are, in principle, complementary; in practice, their interaction will take years of case law to clarify.

Key Implementation Priorities

For public sector bodies and technology companies working through what compliance actually requires, five implementation areas will demand immediate attention (an illustrative sketch follows the list):

  • Risk assessment methodologies for AI systems affecting human rights
  • Transparency requirements for algorithmic decision-making processes
  • Appeals mechanisms for individuals affected by AI system decisions
  • Cross-border cooperation frameworks for investigation and enforcement
  • Regular review and updating processes to address technological evolution
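A minimal sketch of how those five areas might be tracked internally is shown below; the checklist structure, area names, and evidence references are assumptions for illustration, not requirements drawn from the treaty text.

```python
# A minimal sketch, assuming a hypothetical internal checklist: it flags which of
# the five priority areas listed above still lack documented evidence. The area
# names mirror the list; the evidence register is an assumption for illustration.
PRIORITY_AREAS = [
    "risk_assessment",           # human rights impact assessment on file
    "transparency",              # algorithmic decision-making disclosures published
    "appeals_mechanism",         # route for affected individuals to contest decisions
    "cross_border_cooperation",  # contact points for investigation and enforcement
    "review_process",            # scheduled reviews tracking technological change
]


def missing_evidence(evidence: dict[str, str]) -> list[str]:
    """Return the priority areas with no documented evidence."""
    return [area for area in PRIORITY_AREAS if not evidence.get(area)]


# Hypothetical evidence register for a single deployment.
evidence = {
    "risk_assessment": "HRIA-2024-017",
    "transparency": "public model card v3",
    "appeals_mechanism": "",  # not yet documented
}
print(missing_evidence(evidence))
# ['appeals_mechanism', 'cross_border_cooperation', 'review_process']
```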

What Comes Next

The treaty is open to accession by any country, not solely Council of Europe members. That design choice reflects an ambition to establish a genuinely global baseline rather than a European club standard. How many non-Western nations ultimately accede will determine whether the Convention becomes a true international framework or a transatlantic agreement with honorary membership.

The real test arrives during the implementation phase. Divergent national interpretations of the Convention's broad principles could undermine its effectiveness and create regulatory arbitrage opportunities for companies willing to route operations through whichever jurisdiction interprets obligations most leniently. Consistent, rigorous implementation across signatory states is not guaranteed, and the absence of hard sanctions makes coordination harder to sustain over time.

From autonomous vehicles to predictive healthcare and welfare benefit eligibility systems, AI already influences consequential decisions affecting millions of people across Europe every day. The Council of Europe Convention represents a serious attempt to govern that reality through binding international law. Whether it succeeds depends entirely on the political will of signatories to translate principles into legislation that genuinely protects citizens rather than merely satisfying a diplomatic checklist.

AI Terms in This Article
AI governance

The policies, standards, and oversight structures for managing AI systems.

AI safety

Research focused on ensuring AI systems behave as intended without causing harm.

