The £670 Million Lesson: What Europe Must Learn from the UK's Looming AI Textbook Gamble

A catastrophic AI textbook rollout abroad has handed European policymakers a stark warning. With the UK and several EU member states actively exploring AI-powered learning tools, the collapse of a state-backed digital textbook programme within four months offers a blueprint for exactly what not to do.

Rushed political timelines, poor quality control, and undertrained teachers brought an $850 million (roughly £670 million) AI textbook initiative crashing down, and Europe's education policymakers would be foolish to look away.

The programme in question launched in March 2025, promising personalised learning, reduced teacher workloads, and lower dropout rates. Seventy-six AI-powered textbooks were rolled out across mathematics, English, and coding subjects to thousands of schools, backed by partnerships with a dozen publishing companies. By October 2025, the whole thing had been quietly reclassified as "supplemental materials", giving schools tacit permission to abandon it entirely. By December, they largely had.

The story matters acutely in Europe right now. The UK government has been actively courting EdTech investment as part of its wider AI opportunity agenda, and several EU member states, including France and the Netherlands, are piloting AI-assisted learning tools in state schools. The question is not whether AI has a role in education. It clearly does. The question is whether governments are being honest with themselves about the conditions required for that role to deliver results.

What Actually Went Wrong

The failures were neither subtle nor slow to emerge. From the first weeks of rollout, students and teachers reported technical glitches that brought lessons to a halt. Teachers described content quality as poor and hastily assembled, with factual errors undermining the credibility of the material. The AI personalisation features, the central selling point of the entire programme, malfunctioned regularly. Publishers who had been promised faster production cycles through AI found themselves experiencing significant delays instead.

Teacher training was catastrophically inadequate. An estimated 98.5% of educators received insufficient preparation before the tools arrived in their classrooms. The result was predictable: teachers spent more time troubleshooting than teaching, and student frustration mounted rapidly.

The programme's political architecture made things worse. The education minister declared the textbooks mandatory, then reversed course under public pressure to voluntary pilot status, before the programme was quietly killed off following broader political upheaval. Adoption rates during the mandatory phase varied wildly, from 98% in politically sympathetic regions to as low as 8% in others. The technology was never the deciding factor; political compliance was, which is precisely the wrong foundation for an educational reform.

A Deterioration in Four Stages

The programme's collapse followed a clear and depressingly predictable arc. At launch in March 2025, usage stood at 37% of enrolled schools. By August, following a policy reversal that stripped the mandatory requirement, usage had dropped to 25%. October's reclassification to supplemental status pushed that figure to 19%. By December, effective abandonment was widespread, with usage estimated at 15% and falling.

Publishers, whose collective investment amounted to roughly £450 million of the programme's total cost, were left exposed. A hastily formed industry emergency committee filed a constitutional petition demanding the government reverse course. It is a reminder that when governments rush technology to market for political reasons, private sector partners absorb much of the financial pain when the inevitable correction arrives.

Why This Is Europe's Problem Too

Professor Rose Luckin, a leading AI in education researcher at UCL's Knowledge Lab in London, has consistently argued that AI tools in classrooms must be co-designed with teachers, not handed to them as finished products. Her research highlights that the effectiveness of any educational technology depends almost entirely on how well it is integrated into existing pedagogical practice, not on the sophistication of the underlying model.

That view is gaining traction at a regulatory level as well. The EU AI Act, which entered into force in August 2024, classifies AI systems used in education as high-risk, triggering requirements for conformity assessments, transparency obligations, and human oversight mechanisms before deployment. Dragiša Pešić, a Brussels-based policy analyst at AlgorithmWatch, has noted that the Act's high-risk classification for educational AI is one of its more consequential provisions, precisely because it creates a structural barrier against the kind of rushed, under-tested rollout that derailed the programme described here.

The UK, operating outside the EU AI Act, is following a different regulatory path under its sector-led, principles-based approach. That flexibility has advantages, but it also means there is no mandatory conformity assessment forcing a pause before large-scale deployment in schools. The Department for Education's appetite for AI-driven efficiency gains is real, and the risk of procurement decisions being driven by ministerial enthusiasm rather than evidence is not hypothetical.

The Publisher Risk Is Structural

One underappreciated dimension of this failure is the financial exposure it created for private sector partners. Publishers invested heavily on the basis of government mandates, then found those mandates revoked. This is not simply a cautionary tale about due diligence. It is a structural problem with how governments procure and deploy unproven technology at scale.

European EdTech companies and publishers considering government partnerships on AI-powered learning tools should treat this episode as a serious stress test of their contractual protections. What happens to their investment if a programme is reclassified or discontinued before the contracted period ends? In the UK, where the government is actively inviting private sector co-investment in public sector AI projects, these questions need clear answers before contracts are signed.

Five Lessons European Policymakers Cannot Afford to Ignore

  • Pilot before you mandate. Voluntary, time-limited pilots with robust evaluation frameworks are not weakness; they are the only responsible path to evidence-based scaling.
  • Teacher training is not optional. Any deployment plan that does not put teacher preparation at its centre before launch is setting itself up to fail.
  • Quality control must precede political announcements. Launching on a political timetable rather than a technical one is how you end up with factual errors in publicly funded educational materials.
  • Measure outcomes, not outputs. The number of textbooks deployed is not a success metric. Learning outcomes, teacher confidence scores, and usage consistency are.
  • Protect private sector partners contractually. If governments want industry investment in public sector AI, they must honour the terms of that partnership even when political winds shift.

The EU AI Act provides a partial structural safeguard for member states, but regulation alone does not substitute for political discipline in procurement. The UK, in particular, needs to resist the temptation to treat AI in education as a showcase for its post-Brexit innovation credentials at the expense of the students and teachers who will live with the consequences of a botched rollout.

An $850 million failure is an expensive lesson. Europe did not pay for it. That does not mean Europe can afford to ignore it.

