Europe's Stake in the Framework
The Netherlands' co-hosting role is no accident. The Hague is home to the International Court of Justice and the International Criminal Court, and Dutch foreign policy has consistently prioritised the codification of international norms around emerging military technologies. For the Netherlands, embedding human oversight requirements into a multilateral framework is both a principled position and a strategic one: it shapes the rules of a game in which European defence firms are already competing.
Marietje Schaake, international policy director at the Cyber Civil Rights Initiative and former Member of the European Parliament, has argued publicly that the European Union must treat military AI governance as inseparable from its civilian AI regulatory agenda. Writing in 2023, she warned that without explicit provisions defining which military applications fall inside or outside its scope, the EU AI Act risks leaving defence applications in a regulatory vacuum. That concern is now directly relevant: if the summit's voluntary principles are ever translated into binding obligations, the EU's existing legislative architecture will need to accommodate them.
The conflict in Ukraine has turned this theoretical debate into an urgent practical one. Ukrainian forces have deployed AI-enabled drone systems against Russian positions at scale, demonstrating both the battlefield effectiveness of autonomous targeting assistance and the accountability gaps that open up when algorithmic decision-making intersects with lethal force. European governments supplying Ukraine have had to grapple, quietly but urgently, with the legal and ethical dimensions of the systems they are transferring.
The Double-Edged Technology
Military leaders have long recognised the contradictory character of AI in warfare. The technology enhances operational speed, logistics optimisation, intelligence processing, and predictive maintenance. It also introduces novel failure modes, adversarial vulnerabilities, and accountability ambiguities that traditional command structures are poorly equipped to manage.
Paul Scharre, vice president and director of studies at the Center for a New American Security and author of Four Battlegrounds: Power in the Age of Artificial Intelligence, has described autonomous weapons as a governance challenge that outpaces current international law. His framing, widely cited in European policy circles, is that the question is not whether AI will be used in warfare but whether the humans deploying it retain meaningful control over outcomes. The summit's insistence on human oversight for lethal decisions reflects exactly this concern.
European defence budgets are rising. Germany's Sondervermögen, the 100-billion-euro special defence fund, is channelling procurement towards networked, AI-assisted systems. France's Direction Générale de l'Armement has explicitly included AI in its long-term equipment programme, the LPM 2024-2030. Poland, the Baltic states, and the Nordic countries have all increased spending sharply since February 2022. The question is not whether European militaries will adopt AI; they are already doing so. The question is whether they will do it inside a coherent governance framework or outside one.
Governance Architecture: What the Summit Produced
The summit's blueprint, though non-binding, establishes a set of principles that will influence national procurement standards, export controls, and ultimately the development roadmaps of European AI companies supplying the defence sector. The key commitments include:
- Mandatory human oversight for lethal autonomous weapons systems
- Compliance verification mechanisms aligned with international humanitarian law
- Civilian protection protocols for AI-enabled military operations
- Transparency requirements for autonomous decision-making processes
- Regular review cycles to adapt guidelines as technology evolves
- Multi-stakeholder consultation frameworks involving industry, academia, and civil society
The multi-stakeholder model is significant. Traditional military procurement is government-to-industry, opaque, and slow. AI development is commercial-first, fast, and often dual-use by design. Companies such as Palantir, which has major European operations and defence contracts across the UK, France, and Germany, sit precisely at this intersection. So does Leonardo, the Italian defence group, which has been developing AI-assisted surveillance and targeting systems. Getting these firms into governance conversations early, rather than retrospectively imposing standards on deployed systems, is the logic behind the summit's inclusive format.
The Private Sector Complication
The relationship between commercial AI innovation and government deployment authority is the central tension in military AI governance everywhere, and Europe is no exception. Most AI breakthroughs originate in commercial research environments, many of them in the United States or China, and are adapted for military use after the fact. This creates procurement dynamics that existing regulatory frameworks struggle to handle.
The EU AI Act classifies certain AI systems used in critical infrastructure and law enforcement as high-risk, requiring conformity assessments and human oversight. Defence applications are largely excluded from the Act's scope under national security exemptions. This gap is precisely what critics such as Schaake have identified: the civilian governance architecture is rigorous in principle but porous at the boundary with military use.
For UK-based firms, the picture is similarly complicated. The UK government's 2023 AI Safety Summit at Bletchley Park addressed frontier AI risks but did not produce sector-specific guidance for defence applications. The Defence and Security Accelerator has funded AI projects, but coherent doctrine on autonomous weapons governance remains a work in progress. The summit's blueprint at least provides an external reference point against which UK policy can be benchmarked.
From Principles to Practice
Voluntary frameworks have a poor track record of constraining state behaviour in security competition. The Wassenaar Arrangement on export controls for dual-use technologies is routinely circumvented. The Convention on Certain Conventional Weapons has been discussing lethal autonomous weapons systems for over a decade without producing a treaty. Scepticism about the summit's enforceability is entirely warranted.
But the alternative to imperfect multilateral engagement is not a cleaner bilateral arrangement; it is no arrangement at all. European nations, as mid-sized military powers with strong industrial bases and significant AI research capacity at institutions such as ETH Zurich and the Alan Turing Institute, have more to gain from a rules-based framework than from an unconstrained technological race dominated by the United States and China. The summit's significance is less about what it compels and more about what it normalises: the expectation that states will explain, justify, and review their military AI deployments against shared standards.
That normalisation, if sustained, will shape European industrial policy, export licensing, and the research agendas of universities and think tanks for the next decade. Dismissing it as mere declaratory politics mistakes process for product. The process is the point.