The Hague Blueprint: How Europe Helped Write the Rules for Military AI

A landmark summit co-hosted by the Netherlands and the United Kingdom, alongside three partner nations, brought more than 90 countries together to establish responsible guidelines for artificial intelligence in warfare. With autonomous weapons proliferating and defence budgets surging, Europe is now at the centre of efforts to govern military AI before the technology outpaces diplomacy.

Europe is no longer a bystander in the race to govern military artificial intelligence. A landmark summit co-hosted by the Netherlands, the United Kingdom, South Korea, Singapore, and Kenya drew together representatives from more than 90 nations to establish a shared blueprint for responsible AI in warfare. The gathering, held in September, is the most ambitious international attempt yet to place guardrails around autonomous weapons systems, and European co-leadership signals that Brussels and The Hague intend to shape those rules, not merely follow them.

Key takeaways:

  • The Netherlands and UK co-hosted a 90-nation summit setting voluntary military AI standards
  • Europe's defence budgets are rising sharply, accelerating autonomous systems procurement
  • Human oversight of lethal decisions is the summit's core non-negotiable principle
  • Voluntary guidelines lack enforcement teeth; binding agreements remain politically distant


What the Summit Actually Agreed

The summit's output is a non-binding blueprint, but its scope is broader than any predecessor initiative. Delegates addressed four critical areas that will define how nations deploy AI on the battlefield over the next decade:

  • Legal review processes to ensure compliance with international humanitarian law
  • Mandatory human oversight so that autonomous weapons do not make life-or-death decisions independently
  • Civilian protection protocols governing AI-enabled military operations
  • The deeply contested role of AI in nuclear weapons management

The Netherlands' co-hosting role was not incidental. The country is home to the International Court of Justice and the International Criminal Court, and Dutch officials have consistently argued that existing international law already applies to autonomous weapons. For The Hague, anchoring military AI governance in legal frameworks already accepted by most states is a more pragmatic path than negotiating an entirely new treaty.


Europe's Own Military AI Expansion

It would be convenient for European policymakers to frame this summit purely as an exercise in responsible restraint. The reality is more complicated. European defence spending is rising sharply, and AI-enabled systems are a central part of that expansion. Ukraine's deployment of AI-guided drones against Russian forces, supplied and supported in part by European partners, has demonstrated both the operational value and the accountability risks of autonomous battlefield technology.

Marietje Schaake, international policy director at the Stanford Cyber Policy Center and a former member of the European Parliament with deep expertise in AI regulation, has argued consistently that Europe cannot credibly export AI governance norms while simultaneously loosening procurement rules for the same technologies at home. That tension sits at the heart of the summit's ambitions.

Within the EU itself, the AI Act explicitly carves out military and national security applications from its scope. That exemption has drawn criticism from civil society groups and some academics, who argue it creates a significant blind spot in what is otherwise the world's most comprehensive AI regulatory framework. Paul Timmers, a senior research fellow at the Oxford Internet Institute and a former European Commission director, has noted that the AI Act's silence on military AI leaves member states operating under no common standard. That is the gap the summit's blueprint attempts, imperfectly, to fill.

The Governance Architecture

Key principles emerging from the summit discussions represent a starting point rather than a finished architecture. The agreed framework includes:

  • Mandatory human oversight for lethal autonomous weapons systems
  • Compliance verification mechanisms aligned with international humanitarian law
  • Transparency requirements for autonomous decision-making processes
  • Regular review cycles to adapt guidelines as technology evolves
  • Multi-stakeholder consultation frameworks involving industry, academia, and civil society

The summit builds on discussions under the United Nations Convention on Certain Conventional Weapons on lethal autonomous weapons systems, which have been grinding along since 2014 without producing binding obligations. The complementary approach taken in September reflects a pragmatic acceptance that a legally binding global treaty on military AI is not achievable in the near term, and that voluntary norms, consistently applied, may be the realistic alternative.

Private Sector Entanglement

One of the summit's more candid acknowledgements was the complex relationship between commercial AI development and military deployment. The technology underpinning autonomous weapons systems, from computer vision to large language model-based decision support, originates almost entirely in the private sector. European defence primes such as Leonardo, Thales, and Rheinmetall are integrating commercially developed AI into weapons platforms at pace. Yet governments retain formal authority over deployment decisions.

This creates a governance gap that traditional military procurement models are poorly equipped to handle. A drone's targeting algorithm may have been trained on open-source data, refined by a university spin-out, licensed to a defence contractor, and integrated into a system operated by a national military, all without any single point of clear accountability. The summit's blueprint attempts to assign that accountability to governments, but the mechanisms for enforcing it remain underdeveloped.

What Comes Next for European Policy

For EU and UK policymakers, the summit raises an immediate question: what domestic legislative steps follow from co-hosting an international governance initiative? The UK's Ministry of Defence has published a responsible AI strategy, but it lacks statutory force. The European Defence Agency has begun mapping member state approaches to autonomous systems, but coordination remains patchy.

The most concrete near-term deliverable is likely to be a series of bilateral and multilateral confidence-building measures: information sharing on AI testing protocols, joint exercises involving human oversight of autonomous systems, and harmonised export controls on military AI components. None of these require new treaties. All of them require sustained political will, which is precisely what has been lacking in previous arms control efforts.

The September summit has set a foundation. Whether European governments build on it, or allow it to become another aspirational framework gathering diplomatic dust, will determine whether the Continent's claim to lead on AI governance has any substance on the battlefield as well as in the boardroom.

AI Terms in This Article
computer vision

AI that can analyze and understand images and videos.

responsible AI

Developing and deploying AI with consideration for ethics, fairness, and safety.

AI governance

The policies, standards, and oversight structures for managing AI systems.

guardrails

Safety constraints built into AI systems to prevent harmful outputs.

regulatory framework

A set of rules and guidelines governing how something can be used.
