AI Monopolisation Is a Clear Threat to Europe's Technological Sovereignty, Experts Warn

Fields Medal winner Terence Tao has warned that concentrating AI in the hands of a few corporations poses fundamental risks to society. With deepfake detection falling behind creation speeds and open-source models lagging years behind commercial rivals, European regulators and researchers face urgent choices about how to build resilient, competitive AI ecosystems.

Allowing two or three corporations to control artificial intelligence is not a market inefficiency; it is a civilisational risk. That is the blunt assessment of Terence Tao, Professor of Mathematics at UCLA and one of the most decorated mathematicians alive, whose warnings about AI monopolisation are landing with particular force in Europe as the EU AI Act moves from legislation into enforcement.

Tao's concern is straightforward: open-source AI models currently lag two to three years behind their commercial counterparts in capability. That gap creates a structural dependency on proprietary systems, one that is especially dangerous for governments and critical infrastructure operators who cannot afford to have their decision-making pipelines controlled by a foreign boardroom.


The Monopoly Problem and What It Means for the EU

"It's not good for something as important as AI to be a monopoly held by one or two companies," Tao has said publicly. For European audiences, this is not an abstract complaint. The EU is already wrestling with the consequences of digital dependency on non-European platforms, and AI is set to deepen that dependency unless deliberate countermeasures are taken.

Margrethe Vestager, formerly the EU's Executive Vice-President for A Europe Fit for the Digital Age and the architect of the bloc's competition enforcement against Big Tech, has consistently argued that market concentration in digital infrastructure undermines both innovation and democratic accountability. Her framework applies directly to AI: if foundation models remain the preserve of a small number of US and Chinese hyperscalers, European firms and public bodies will be building on foundations they do not control and cannot audit.

The European response has included significant public investment in open-source alternatives. Mistral AI, headquartered in Paris, has emerged as the most prominent European challenger, releasing open-weight models that regulators and enterprises can inspect and modify. Yet even Mistral's leadership acknowledges that closing the capability gap with GPT-4-class systems requires sustained, coordinated funding that no single European company can provide alone.

[Photo: a lone engineer in a high-visibility jacket reviews a tablet display among rows of illuminated server racks in a modern European data centre.]

Deepfakes, Elections, and the Detection Gap

Tao's concerns extend beyond market structure into electoral integrity. His statistical analysis of Venezuela's 2024 presidential election found wildly implausible patterns in the reported vote percentages, figures so precisely rounded that, in his words, "there is only a one in 100 million chance that the observed result of having extremely round percentages would have occurred" without manipulation.
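The flavour of that argument can be sketched in a few lines of Python. This is a toy illustration, not Tao's actual computation: for a hypothetical election with N total votes (N here is an illustrative assumption, not the real Venezuelan turnout), it counts how many possible vote totals are exactly what you would get by back-computing a count from its own percentage rounded to one decimal place. Such counts look as though they were derived from a round headline figure rather than tallied from ballots.

```python
# Toy sketch of the round-percentage argument (NOT Tao's exact method).
N = 10_000_000  # hypothetical total turnout (illustrative assumption)

def looks_back_computed(c: int, n: int) -> bool:
    # Percentage expressed in tenths of a per cent, e.g. 512 means 51.2%.
    pct_tenths = round(1000 * c / n)
    # Does rounding that one-decimal percentage back into a vote count
    # reproduce c exactly?
    return c == round(pct_tenths * n / 1000)

# At most one count per one-decimal percentage (0.0% .. 100.0%) can
# have the property, so enumerate the candidates directly.
special = set()
for k in range(1001):
    c = round(k * N / 1000)
    if looks_back_computed(c, N):
        special.add(c)

p_single = len(special) / (N + 1)
print(f"{len(special)} of {N + 1:,} possible counts, p ≈ {p_single:.1e}")
print(f"three independent counts all round-derived: ≈ {p_single ** 3:.0e}")
```

For one candidate the chance of landing on such a count by accident is on the order of one in ten thousand; for several candidates at once it collapses toward one in a trillion, which is why exactly round percentages across an entire field of candidates are so damning.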

The methodology is directly transferable to European contexts. With European Parliament elections held every five years and national elections running continuously across 27 member states, the attack surface for AI-enabled electoral interference is substantial. Deepfake technology is accelerating the threat. Detection accuracy, which stood at roughly 95 per cent for basic synthetic video in 2020, has declined to around 73 per cent against 2024-quality deepfakes, according to industry benchmarks, while creation time has dropped from 48 hours to under 30 minutes. Projections suggest detection accuracy will fall further as near-perfect synthesis becomes achievable within two years.
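To make the trajectory concrete, a naive straight line can be drawn through the two benchmark figures quoted above (95 per cent in 2020, 73 per cent in 2024). Real capability curves are unlikely to be linear, so this is a sketch of the direction of travel, not a forecast.

```python
# Naive linear extrapolation of the detection-accuracy figures quoted
# in the text: 95% in 2020, 73% in 2024. Illustrative only.
years = [2020, 2024]
accuracy = [95.0, 73.0]

# Change in percentage points per year.
slope = (accuracy[1] - accuracy[0]) / (years[1] - years[0])

def projected(year: int) -> float:
    """Linear projection of detection accuracy for a given year."""
    return accuracy[0] + slope * (year - years[0])

print(slope)            # -5.5 points per year
print(projected(2026))  # 62.0
```

Even this crude fit implies detection sliding towards 60 per cent within two years, consistent with the projections of near-perfect synthesis mentioned above.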

Hany Farid, a leading digital forensics researcher at UC Berkeley whose work is widely cited by European law enforcement agencies including Europol, has noted that the asymmetry between creation and detection is now the defining challenge for platform integrity teams. European regulators building out the Digital Services Act's enforcement regime will need to grapple with this asymmetry directly: obligations to detect and label synthetic media are only meaningful if detection tools can keep pace.

AI Weaponisation: Five Vectors Europe Must Address

Beyond elections, the potential weaponisation of AI presents a broader set of security challenges that European defence and intelligence communities are actively mapping. The key threat vectors include:

  • Autonomous weapons systems capable of operating without meaningful human oversight
  • AI-powered mass surveillance technologies that could enable authoritarian governance models
  • Cyber warfare tools designed for sophisticated, targeted infrastructure attacks
  • Information warfare platforms built to manipulate public opinion at scale
  • Economic disruption through AI-enabled market manipulation in financial systems

The European Defence Agency has been expanding its AI-related research portfolio, and the NATO AI principles adopted in 2021 set out a framework for responsible military AI use. However, translating principles into enforceable standards, particularly across 27 member states with divergent defence postures, remains unfinished work.

Building a Resilient European AI Ecosystem

The structural response to monopolisation risk requires more than regulation. It requires a deliberate industrial policy that treats AI infrastructure as a public good, or at minimum a strategic asset, rather than a pure market outcome.

Researchers at ETH Zurich, one of Europe's foremost technical universities, have argued for years that foundational AI research must remain anchored in academic institutions with open publication norms, rather than migrating entirely into corporate R&D labs where findings are proprietary. The same logic underpins the EU's investment in ELLIS (the European Laboratory for Learning and Intelligent Systems), a network of AI research institutes spanning 17 countries designed to retain talent and share knowledge across borders.

Effective implementation of a resilient ecosystem depends on several interlocking priorities:

  • Robust regulatory frameworks under the EU AI Act that encourage competition whilst enforcing safety standards for high-risk applications
  • Sustained public investment in open-source AI research, building on initiatives like Mistral and the BigScience BLOOM project
  • International cooperation on AI governance standards, including with the UK post-Brexit and with Switzerland as a non-EU research hub
  • Education programmes that build technical literacy across the workforce, not just within specialist AI teams
  • Transparent audit and oversight mechanisms for AI deployed in critical sectors including energy, healthcare, and finance

The open-source gap is not insurmountable. Estimates from researchers familiar with the field suggest that five to seven years of coordinated, adequately funded development could close the capability distance between open and proprietary models. The question is whether European governments and institutions will treat that timeline as urgent or merely aspirational.

Tao's warnings, delivered from a mathematician's perspective rather than a technologist's, carry a particular kind of credibility: they are grounded in probabilistic reasoning rather than competitive positioning. Europe would do well to apply the same rigour to its own AI strategy before the dependency gap becomes structural and irreversible.

