Rishi Sunak Tells His Daughters to Master AI or Be Left Behind
Former UK Prime Minister Rishi Sunak is urging the next generation to combine AI literacy with empathy and critical thinking. His advice, shaped by advisory roles at Microsoft and Anthropic, lands as European schools struggle to embed coherent AI education pathways from primary level upwards.
Rishi Sunak has a clear message for his teenage daughters, and by extension for every secondary school student in Britain: learn to work with AI agents now, or risk being overtaken by those who do. The former Prime Minister, speaking at Bloomberg's New Economy Forum, drew on his current advisory roles with Microsoft and Anthropic to argue that AI literacy is no longer optional. It is the baseline skill of the next decade's workforce.
Key Takeaways
Sunak warns his teenage daughters to master AI literacy or face career disadvantage
Advisory roles at Microsoft and Anthropic shape his views on human-AI collaboration
UK secondary schools reach 79% AI lesson coverage but primary provision remains thin
European voices including Mazzucato and Sioli urge ethical grounding alongside technical skill
Empathy and critical thinking identified as the human skills AI cannot replicate
The remarks arrive at a genuinely uncomfortable moment for British and European educators. Schools across the UK, Germany, and France are still debating where AI literacy sits in the curriculum, whether it belongs in computing lessons, PSHE, or woven through every subject. Sunak's framing cuts through that debate with parental bluntness: the children who will thrive are those who can direct AI agents, interrogate their outputs, and bring human judgement to bear on the results.
The Balance Between Technical Fluency and Human Qualities
Sunak was direct about what he tells his daughters. They must become adept at managing AI agents, the autonomous software programmes that handle discrete tasks on a user's behalf. But they must not sacrifice the capabilities that remain distinctively human.
"We're never going to lose the importance of being able to think, to reason, to question critically, so I think those skills will be incredibly important for our young people to develop," he said, citing data from Stanford economists and LinkedIn workforce research.
That framing resonates with researchers who have spent years pushing back against the idea that coding skills alone will future-proof a career. Mariana Mazzucato, professor of the economics of innovation and public value at University College London, has argued consistently that the green and digital transitions demand workers who can apply ethical reasoning and systems thinking, not merely run prompts. Her work on mission-oriented innovation, widely cited in European Commission policy documents, supports exactly the kind of human-AI partnership model Sunak describes.
Similarly, Lucilla Sioli, director of artificial intelligence and digital industry at the European Commission's DG CONNECT, has emphasised in public statements that the EU's approach to AI competence must combine technical upskilling with what she calls "AI awareness", an understanding of how systems make decisions, where they fail, and what that means for the humans relying on them.
Europe's Schools: Strong at Secondary, Thin at Primary
The coverage data makes uncomfortable reading for policymakers. Research from Digital Promise shows a persistent gap between secondary and primary provision:
Primary level: Limited integration, mostly basic digital awareness with no structured AI literacy pathway
Middle school equivalent (ages 11-14): Roughly 73% of students receive some AI literacy lessons, focused on responsible use principles
Secondary level (ages 14-18): 79% receive lessons with a practical application focus
Kelly Mills, senior director of powerful learning research at Digital Promise, identified the structural problem plainly: "The gap between the proportion of secondary students receiving AI literacy lessons and the proportion of primary students receiving them points to the need to develop what we would call AI literacy learning pathways."
In European terms, that gap is replicated and arguably wider. The UK's Department for Education updated its computing curriculum guidance in 2023 but has yet to mandate a coherent AI strand from Key Stage 1. France's Plan IA for schools, launched in early 2024, addresses secondary provision first. Germany's federal structure means provision varies sharply by Bundesland. The result is that the children of parents who think the way Sunak does will arrive at university with AI fluency; those from less informed households will not.
Workforce Readiness and the Skills That Will Not Be Automated
Sunak's list of career-resilient capabilities maps closely to what workforce analysts across Europe are now recommending. The skills he highlights for his daughters are not abstract virtues. They are competitive advantages in a labour market where, as Anthropic CEO Dario Amodei has warned, entry-level white-collar roles face the sharpest displacement risk.
The capabilities that hold their value are:
Critical evaluation of AI-generated content and the confidence to override it when wrong
Emotional intelligence and interpersonal communication, particularly in client-facing or care-adjacent roles
Creative problem-solving that combines human intuition with AI-generated options
Ethical reasoning to navigate AI's societal implications, including bias and accountability
Adaptability to tooling that will look entirely different within five years
This is not a novel list. What is notable is that a former head of government, now embedded inside the organisations building these systems, is repeating it to his own children. That is a data point in itself.
Regulation, Collaboration, and What Sunak's Advisory Roles Tell Us
Sunak's philosophy on AI governance did not evaporate when he left Downing Street. He continues to argue that governments should work directly with AI laboratories to assess risks rather than lead with restrictive legislation. He pointed to the UK's inaugural AI Safety Summit at Bletchley Park in November 2023 as a model: convene the builders, the governments, and the civil society voices together before rules are written.
That posture sits in productive tension with the EU's approach. The EU AI Act, now entering its implementation phase, takes a decidedly more prescriptive line, classifying systems by risk tier and placing hard obligations on providers of high-risk applications including those used in education and employment. Critics within the European AI sector argue the Act will slow adoption in schools precisely because vendors are uncertain about compliance obligations for classroom tools. Supporters counter that students deserve protection from poorly tested systems.
Neither side is entirely wrong, but the debate underlines why Sunak's framing matters for European parents and educators: the regulatory environment will shape which AI tools reach classrooms and how, but it will not resolve the underlying question of whether children are taught to interrogate those tools or simply accept their outputs.
What Schools and Parents Should Do Now
The practical implications of Sunak's argument are not complicated, even if implementing them is. Schools across the UK and EU should treat AI literacy not as a standalone computing module but as a thread running through every subject. Historians should ask students to fact-check AI-generated timelines. Science teachers should ask students to spot where a model's confidence exceeds its evidence base. English teachers should analyse the stylistic flatness of AI prose and ask why that matters.
For parents, the advice is equally concrete: encourage children to use AI tools, then interrogate what the tools produce. Ask where the answer came from. Ask what it missed. Ask what a person would have done differently. That habit of mind, built early, is the real asset.
Sunak's daughters are, by most accounts, growing up in a household where these conversations happen naturally. The policy challenge for Europe is ensuring that advantage does not become another dimension of educational inequality.
AI Terms in This Article
future-proof: Designed to remain useful as technology changes.
AI governance: The policies, standards, and oversight structures for managing AI systems.
AI safety: Research focused on ensuring AI systems behave as intended without causing harm.
bias: When an AI system produces unfair or skewed results, often reflecting prejudices in training data.