Why Big Tech's $725bn AI Spend Reshapes Europe's Sovereignty Plan

Google, Meta, Microsoft and Amazon now plan roughly $725bn of AI capex in 2026. Here is what that wave does to Europe's compute sovereignty agenda.

When Google reported first-quarter results on Wednesday evening, it did more than beat earnings estimates. It signed off on a 2026 capital spending plan that takes the four leading US hyperscalers' combined investment in AI infrastructure to roughly $725 billion for the year, according to a count by the Financial Times. The same day, Microsoft told investors it was adding another $25 billion to its own 2026 budget, citing component price rises across memory and accelerator silicon.

For European policymakers, the headline figure is large enough to be disorientating. It is bigger than the EU's annual research budget by an order of magnitude, more than the entire Recovery and Resilience Facility envelope for digital projects, and roughly equivalent to a year's worth of the bloc's combined defence spending. So what does a number like that actually mean for Brussels' sovereignty agenda? The short answer: it changes the maths in three ways.


Three shifts the $725bn number forces

The first shift is in raw capacity. Most of the spend will not produce papers, models or partnerships in Europe. It will produce data centres, primarily in the United States, Latin America and Southeast Asia, with European builds making up a minority share. Microsoft's marginal capex hike alone is comparable to the United Kingdom's planned ten-year AI compute pledge. When American firms add capacity at this rate, every European workload that runs on a sovereign cloud carries a higher opportunity cost in latency, performance and price.

The second shift is in the bill of materials. Microsoft was unusually frank about why its budget is rising: components are getting more expensive, memory in particular. SK Hynix, Samsung and Micron are sold out of high-bandwidth memory through 2027. Nvidia's Blackwell shipments are gated by advanced packaging capacity at TSMC. ASML, the Veldhoven lithography group whose machines anchor the entire roadmap, has already lifted its 2026 revenue guidance to between 36 and 40 billion euros on AI chip demand. Europe sits at the input layer of this market, but it does not capture the margins of the buyer or the operator.

[Image: hyperscaler data centre construction site]

The third shift is in the energy stack. Stargate, the joint venture vehicle now central to OpenAI's compute strategy, has been redrawn around US sites with cheap firm power. The same logic explains why OpenAI's $500 billion data centre plan in the United Kingdom was paused this month, with the company citing energy costs and planning issues. Hyperscaler capex follows electrons. Europe's dearer megawatt-hours and slower grid connections are now an acquisition cost for any global AI build.

How Europe's sovereignty plan should adjust

Three honest answers help.

  • Pick the layers you can actually own. Frontier compute at hyperscaler scale is no longer a realistic target. Sovereignty for sensitive workloads, mid-scale national clouds, and edge inference are. Every country with a credible compute story, whether France with Bull-Sequana, Germany with Aleph Alpha and SAP, or the UK with its forthcoming hardware plan, has been narrowing its ambitions in this direction over the past nine months.
  • Treat existing strengths as critical infrastructure. ASML's tooling is in every leading-edge fab. Trumpf's lasers are in every EUV machine. Imec's Leuven research base trains the supply chain for both. Brussels' EU Chips Act funding has so far prioritised new fabs; the next tranche needs to protect and deepen the parts of the chain that already work.
  • Make energy policy AI policy. The Commission's recent sovereign AI strategy covered compute, talent and data at length, but touched only in passing on the question of where the electricity comes from. The $725 billion number forces a different conversation. If a European AI gigafactory cannot guarantee competitive industrial tariffs, it will lose to a Texan or Malaysian site every time. Capacity mechanism reform, faster grid connection rules, and dedicated power purchase agreements for AI infrastructure now look like AI policy, not just energy policy.

The hyperscaler capex story is not a warning sign so much as a measurement tool. It tells European leaders, in dollars, which layers of the stack are unwinnable, which are defensible, and which are being neglected.

THE AI IN EUROPE VIEW

The instinct in Brussels and London this week will be to treat $725 billion as a competitive threat to be matched. It is not, and trying will not work. It is a price tag for capabilities that European democracies cannot, and probably should not, fully replicate. The harder, more useful exercise is to ask which workloads truly need to sit on European soil, and then build the energy, chip and cloud strategy that makes those specific pieces credible. The Commission's instinct, judging by the leaked drafts, is to spread effort across all three layers. We think that is exactly the wrong response.

We expect the realistic outcome of the coming Continental AI strategy to look much narrower than its early drafts. Doing fewer things with hyperscaler-level seriousness will get Europe further than doing many things at half strength. The bigger the American number gets, the more disciplined the European response must be. The open question is whether the political appetite for that discipline survives contact with member-state lobbying; we are less confident than we were six months ago that it will. Member-state energy ministers, in particular, will need to move faster than they currently plan to. The deeper risk is that the instinct to spread funding evenly across capitals will produce a portfolio of underweight commitments rather than two or three credible programmes. That is the failure mode worth pre-empting, not the absence of an answer to the American spend.

AI Terms in This Article
inference

When an AI model processes input and produces output. The actual 'thinking' step.

compute

The processing power needed to train and run AI models.

hyperscaler

A massive cloud computing provider like AWS, Azure, or Google Cloud.

sovereign AI

National initiatives to develop domestic AI capabilities independent of foreign providers.

