European AI Factories Are Right in Principle and Wrong in Scale: Here Is How to Fix the Budget
Europe's AI Factories programme, built atop the EuroHPC Joint Undertaking, is the most credible attempt yet to build sovereign compute at continental scale. The direction is correct. The money is not even close to sufficient. Three specific budget reallocations could change that, and the window to act is narrowing fast.
Europe's AI Factories programme is the right answer to the right question, underfunded by an order of magnitude, and running out of time to correct that before the compute gap with the United States and China becomes structurally permanent.
That is not a comfortable conclusion, but it is the honest one. Since the European Commission formalised the AI Factories concept in early 2024, positioning selected EuroHPC supercomputing sites as dedicated hubs for AI training workloads, the political narrative has been admirably clear: Europe cannot build sovereign AI on rented American cloud infrastructure. Dependency on a handful of hyperscalers for the most strategically sensitive computational resource of the decade is a geopolitical vulnerability, not merely a procurement inconvenience. The Commission is right about that. Member states, broadly, agree. The problem is that agreement at the level of principle has not translated into commitment at the level of capital.
The EuroHPC Joint Undertaking, which forms the institutional backbone of the whole enterprise, operates on a combined public budget from the EU and participating states of roughly 7 billion euros across its current multi-annual financial framework period. That sounds large until you compare it with what a single large American AI laboratory spends on compute in a calendar year, or with the announced capital expenditure plans of any of the three major US hyperscalers for 2025 alone. The numbers do not flatter Europe.
"Training a frontier large language model at the scale of those released by leading laboratories in 2024 requires somewhere between 10,000 and 100,000 high-end GPUs running continuously for weeks to months. The AI Factories programme, across all designated sites, does not collectively approach that GPU count for dedicated AI workloads."
AI in Europe analysis
The existing EuroHPC flagship machines, including MareNostrum 5 at the Barcelona Supercomputing Centre, the JUPITER system at Forschungszentrum Jülich, and the capacity maintained by GENCI in France, represent genuine technical achievements. MareNostrum 5, which came into full operation in 2024, delivers significant general-purpose high-performance computing. JUPITER, the first European exascale-class machine, represents a landmark in raw floating-point performance. GENCI has consistently pushed French national capacity upward and has been one of the more aggressive voices inside European HPC policy circles arguing for accelerated GPU procurement. These are not vanity projects. They are real infrastructure, staffed by serious engineers, doing serious science.
But science and frontier AI training are not the same workload. The AI Factories overlay on EuroHPC is an attempt to carve out dedicated GPU-accelerated capacity for model training and fine-tuning, making it accessible to European startups, research labs, and mid-sized enterprises that cannot afford to build proprietary clusters. The intent is correct. The execution is being throttled by the gap between what the programme costs and what it has actually been allocated.
The Scale Problem in Concrete Terms
Let us be specific about what "underfunded by an order of magnitude" actually means in practice. Training a frontier large language model at the scale of those released by leading American and Chinese labs in 2024 requires somewhere between 10,000 and 100,000 high-end GPUs running continuously for weeks to months. The AI Factories programme, across all designated sites, does not collectively approach that GPU count for dedicated AI workloads. The sites are time-shared, often heavily, with existing HPC user communities whose scientific computing needs are entirely legitimate but consume capacity that is therefore unavailable for long AI training runs.
The CEA in France, which is building the Jules Horowitz Reactor and maintains substantial computational infrastructure relevant to both nuclear and AI research, has in published briefings acknowledged the tension between existing scientific computing obligations and the new demands of large-scale AI training. That tension is not unique to France. It is structural across the EuroHPC ecosystem. The machines were specified, procured, and funded for a generation of workloads that predates the transformer revolution. Retrofitting them as AI factories is possible but inefficient, and it requires either additional dedicated hardware or genuine ring-fencing of existing capacity, both of which cost money that has not been committed.
The GPU question is particularly acute. The most capable training accelerators, predominantly NVIDIA H100 and H200 class hardware, are priced at tens of thousands of euros per unit, require specialised high-bandwidth interconnects, consume enormous amounts of power, and generate heat loads that older data centre facilities were not designed to handle. Several EuroHPC sites have had to undertake significant facility upgrades simply to house the hardware they have already acquired. The capital requirement for scaling to genuine frontier-training capability is not incremental. It is transformative.
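The order-of-magnitude claim can be sanity-checked with a short sketch. The unit price, cluster sizes, and facility-overhead multiplier below are illustrative assumptions drawn loosely from the ranges quoted above, not programme figures:

```python
# Back-of-envelope capex for a frontier-scale GPU cluster.
# All inputs are illustrative assumptions, not EuroHPC data:
# unit price and the overhead multiplier (interconnect, power,
# cooling, facility upgrades) are rough public ballparks.

def cluster_capex_eur(gpus, unit_cost_eur=30_000, overhead=1.5):
    """Estimated capex: accelerator spend times an overhead
    multiplier for interconnect and facility costs."""
    return gpus * unit_cost_eur * overhead

# The 10,000-100,000 GPU range cited for frontier training runs.
for gpus in (10_000, 100_000):
    capex = cluster_capex_eur(gpus)
    print(f"{gpus:>7,} GPUs -> ~{capex / 1e9:.1f} bn EUR")
```

Even at the low end of the range, a single dedicated frontier cluster consumes a noticeable fraction of the entire multi-annual EuroHPC budget, which is the arithmetic behind "transformative, not incremental".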
[Chart: the scale of the investment challenge facing the AI Factories programme and EuroHPC, drawing on public capacity disclosures, programme documents, and announced capital plans from comparable institutions and competitors.]
Three Specific Budget Reallocations That Would Close the Gap
Identifying the problem is the easy part. The harder and more useful exercise is identifying where the money should come from and how it should be redirected. Europe is not short of public funds in aggregate; it is short of public funds allocated to the right purpose in the right vehicle. Three specific reallocations would meaningfully change the trajectory.
First: Redirect a Substantial Portion of Cohesion Fund Digital Envelopes
The EU's cohesion and structural funds include substantial digital transformation envelopes directed at member states, with a stated ambition of closing regional digital divides. A meaningful share of this spending ends up funding broadband rollout in areas where commercial operators were always going to build eventually, digitisation grants for SMEs that generate modest additionality, and national e-government projects of questionable strategic value. None of this is worthless. Some of it is genuinely useful. But in a period when the strategic computing gap between Europe and its principal competitors is widening at pace, continuing to allocate cohesion digital funds on the existing basis represents a straightforward misallocation of strategic capital.
A reallocation of 15 to 20 percent of cohesion fund digital envelopes, through a reformed matching mechanism that requires member states to co-invest in EuroHPC AI Factory capacity rather than purely national infrastructure, would generate several billion euros of additional compute investment over the current financial framework without requiring any increase in the overall EU budget. The political resistance would be real, particularly from net recipient member states that benefit most from cohesion flows. It would need to be structured carefully to maintain the additionality logic of cohesion policy. But the technical mechanism exists and the strategic case is overwhelming.
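The "several billion euros" claim follows directly from the percentages, as a short sketch shows. The envelope total and the 1:1 member-state match are placeholder assumptions for illustration, not official cohesion-policy figures:

```python
# Illustrative arithmetic only. The digital-envelope total and the
# 1:1 member-state match are placeholder assumptions, not official
# cohesion-policy figures.

def cohesion_reallocation(envelope_eur, share, member_match=1.0):
    """Return (redirected EU funds, total investment once the
    required member-state co-investment match is added)."""
    redirected = envelope_eur * share
    return redirected, redirected * (1 + member_match)

ENVELOPE_EUR = 30e9  # hypothetical framework-period digital envelope
for share in (0.15, 0.20):
    eu_part, total = cohesion_reallocation(ENVELOPE_EUR, share)
    print(f"{share:.0%} reallocated -> {eu_part / 1e9:.1f} bn EUR EU funds, "
          f"{total / 1e9:.1f} bn EUR with 1:1 match")
```

The point of the sketch is the mechanism, not the precise totals: a matching requirement roughly doubles the compute investment generated by each reallocated euro.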
Second: Consolidate Fragmented National AI Programmes Into EuroHPC Co-Funding
France has its national AI strategy, Germany has its AI Made in Germany initiative, Spain has its National Artificial Intelligence Strategy, the Netherlands has its AI Coalition, and so on across 27 member states. These programmes collectively spend billions of euros annually on AI-related investment, the majority of which is directed at domestic priorities, domestic champions, and domestic institutions with limited interoperability or shared benefit. The Barcelona Supercomputing Centre, Forschungszentrum Jülich, and GENCI are all beneficiaries of national funding streams that run parallel to, rather than through, EuroHPC. The result is coordination failure at continental scale.
A structured programme to consolidate 25 to 30 percent of national AI compute investment into EuroHPC co-funding agreements, with guaranteed access rights for contributing member states' researchers and companies, would both increase total capacity and reduce unit costs through procurement scale. The objection that member states will resist surrendering control over national assets is legitimate but answerable: EuroHPC already operates on exactly this co-funding and co-ownership model for its flagship machines. Extending that model to a broader set of national AI compute investments is a political negotiation, not a technical impossibility. Forschungszentrum Jülich's experience co-hosting the JUPITER system demonstrates that the governance frameworks for this kind of arrangement are mature and functional.
Third: Establish a European Compute Bond Instrument Under the EIB
The European Investment Bank has the institutional capacity, the balance sheet, and the existing mandate to finance long-duration infrastructure of strategic importance. AI compute infrastructure, specifically large GPU clusters with operational lives of five to seven years before significant technology refresh is required, fits the EIB's infrastructure financing model better than most people in the HPC community have recognised. The capital costs are front-loaded, the depreciation curve is steep, and the return is primarily in the form of economic externalities rather than direct revenue, which is precisely the profile that development bank instruments are designed to address.
An EIB-backed European Compute Bond instrument, structured to raise 10 to 15 billion euros from institutional investors with an EU budget guarantee covering first-loss exposure, would allow EuroHPC to procure GPU capacity at a scale that approximates, though does not yet match, what frontier AI training requires. The guarantee exposure for the EU budget would be a fraction of the face value. The precedent exists in the form of the European Fund for Strategic Investments and its successor programmes. The political will to extend it to compute infrastructure is the missing ingredient, not the financial architecture.
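The claim that the EU budget's exposure would be a fraction of the face value rests on the first-loss structure, which a short sketch makes concrete. The tranche share used here is a hypothetical assumption; EFSI-style instruments vary in how the guarantee is layered:

```python
# Sketch of first-loss guarantee arithmetic. The 15% tranche share
# is a hypothetical assumption, not a term of any actual instrument.

def eu_guarantee_exposure(face_value_eur, first_loss_share=0.15):
    """EU budget exposure when the guarantee covers only the
    first-loss tranche of the bond issuance."""
    return face_value_eur * first_loss_share

# The 10-15 bn EUR issuance range discussed in the text.
for face in (10e9, 15e9):
    exposure = eu_guarantee_exposure(face)
    print(f"{face / 1e9:.0f} bn EUR raised -> "
          f"{exposure / 1e9:.2f} bn EUR guarantee exposure")
```

Under that assumption, institutional investors supply most of the capital while the budgetary commitment stays in the low single-digit billions, which is the profile the EFSI precedent established.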
Why the Principle Is Right Even If the Execution Is Lacking
It would be easy, given the scale of the funding gap, to conclude that the AI Factories strategy is wishful thinking dressed up as industrial policy. That conclusion would be wrong. The principle of building shared, sovereign, publicly accessible AI compute infrastructure is correct for reasons that go beyond national pride or industrial strategy in the narrow sense.
European AI research and European AI companies need access to compute that is not subject to export controls, that cannot be switched off by a foreign government's policy decision, that does not embed surveillance or data extraction by a foreign commercial entity, and that is governed by legal frameworks consistent with European law. All four of those requirements are currently unmet for European organisations that rely predominantly on US cloud providers for AI training capacity. The AI Factories programme, whatever its current limitations, is an attempt to address a structural vulnerability that is real and growing.
The Barcelona Supercomputing Centre has argued publicly that European AI capability depends on compute sovereignty, not merely regulatory sovereignty, and that the two are not substitutes for each other. That argument is correct. Legislating rules for AI systems trained on American infrastructure is not the same as having the capacity to train competing systems on European infrastructure. The AI Act is necessary but not sufficient. Compute access is the other half of the equation, and the AI Factories programme is the only credible European vehicle for delivering it.
The question is not whether to fund it. The question is whether Europe will fund it at a scale that makes it strategically meaningful before the window closes. On current trajectories, the answer is no. With the three reallocations outlined above, the answer could be yes. The political will to execute them is the only variable that genuinely remains uncertain.
THE AI IN EUROPE VIEW
The AI Factories programme deserves credit for clarity of ambition, and the EuroHPC Joint Undertaking has delivered more than its critics acknowledge. MareNostrum 5, JUPITER, and the network of associated sites are genuine assets, built by serious people, operating to serious technical standards. None of that changes the fundamental arithmetic: the programme is underfunded by a factor of ten or more relative to what frontier AI training actually costs, and the gap is widening, not closing.
The three reallocations we have proposed (cohesion fund redirection, national programme consolidation, and an EIB compute bond instrument) are not radical ideas. They use existing institutions, existing legal instruments, and existing funding flows. What they require is political will to prioritise strategic compute over the comfortable distribution of digital transformation grants and the equally comfortable fiction that 27 parallel national AI strategies add up to something coherent at the European level.
If the Commission and member states are serious about compute sovereignty, they need to fund it seriously. Half-measures will produce a half-built factory, and a half-built factory is not a strategic asset. It is an expensive symbol. Europe can afford better than that, and the AI Factories programme, properly funded, could deliver it. The time to decide is now, not at the next multi-annual financial framework review.
Updates
published_at reshuffled 2026-04-29 to spread distribution per editorial directive
Byline migrated from "Sofia Romano" (sofia-romano) to Intelligence Desk per editorial integrity policy.
AI Terms in This Article (6 terms)
fine-tuning
Training a pre-built AI model further on specific data to improve its performance on particular tasks.
transformer
The neural network architecture behind most modern AI language models.
GPU
Graphics Processing Unit, the type of powerful chip that AI models run on.
transformative
Causing a major change in form, nature, or function.
ecosystem
A network of interconnected products, services, and stakeholders.
digital transformation
Adopting digital technology across a business.