The AI Honeymoon Is Over: Why European Workers Are Using More and Trusting Less
Usage is up, confidence is down, and the gap between AI's marketing promises and workplace reality is widening fast. A striking new dataset shows that more European workers than ever are using AI tools daily, yet fewer believe in them. The honeymoon period has ended, and what happens next is a people problem, not a technology one.
Workers across the UK and Europe are using AI more than ever before, yet their confidence in the technology has plummeted by 18% in a single year. That is not a paradox. It is a reckoning.
Key Takeaways
AI usage rose 13% globally while worker confidence fell 18% in the same period
56% of workers report receiving no recent AI training from their employer
Only 28% of organisations can translate AI tool usage into measurable business outcomes
64% of workers plan to stay in current roles, partly driven by automation anxiety
European businesses deploying AI broadly without training are accelerating the confidence crisis
A January 2026 study from ManpowerGroup delivered findings that should alarm every boardroom on the continent. For the first time in three years, worker confidence in AI has declined, even as adoption accelerates. The data signals that the easy wins are behind us and the hard work of genuine integration has barely begun.
The Global Confidence Crash, Felt Acutely in Europe
ManpowerGroup's research tracked a 13% year-on-year jump in AI usage, with 45% of the global workforce now using the technology regularly. Yet confidence in that same technology dropped 18%. More people are using AI than ever, and fewer of them trust it.
Mara Stefan, vice president of global insights at ManpowerGroup, was direct about the consequences: "You can't have an intimidated workforce and be fully productive. That anxiety is going to cause real problems."
The numbers extend further. While 89% of workers feel confident in their current roles, 43% now fear automation could replace their job within two years. That figure is up 5% from 2025. This anxiety is driving what ManpowerGroup calls "job hugging," with 64% of workers planning to stay with their current employer rather than risk the uncertainty of a move.
For UK-based practitioners, this is not abstract data. Tabby Farrar, head of search at Candour, a UK-based SEO and web design agency, captures the daily reality precisely. Her team is genuinely motivated to adopt AI, but the experience is deeply uneven.
"As a manager, I'm trying to get the team more on board with AI stuff, because it's the future of so many industries," Farrar said. "There's just so many people going, 'I have lost two hours of my day trying to make this thing work.'"
For every workflow where AI delivers a clear time saving, there are several more where the tool creates friction rather than removing it. A prompt that nails a product description in seconds is followed, days later, by two hours of wrestling with a different task only to abandon the tool entirely and do it manually.
The Training Void at the Heart of the Problem
The EY Work Reimagined report, published in November 2025, found that while roughly nine in ten employees are now using AI at work, only 28% of organisations can translate that usage into meaningful business outcomes. The tools are being handed out. The capability to use them well is not.
ManpowerGroup's data makes the structural failure explicit:
56% of respondents reported receiving no recent AI training from their employer
57% said they had no access to mentorship on AI use
Workers are being given powerful tools with almost no structured guidance
Anna Thomas, co-founder of the Institute for the Future of Work, a UK-based research body that advises policymakers and businesses on technology transitions, has argued consistently that this gap between tool deployment and human readiness is the defining failure of current AI strategy. Organisations are measuring success by the number of licences purchased rather than by whether workers can actually use those licences to improve their output.
The mismatch between vendor demonstrations and workplace reality is a significant driver of the confidence drop. Those polished product demos make everything look frictionless. The reality involves significant trial and error, and workers who lack a support structure to navigate that period are left feeling that the technology has failed them, even when the real failure is organisational.
As Kristin Ginn, founder of trnsfrmAItn, has observed: "If you're now starting to look at how you can use AI for the same task, you all of a sudden have to put a lot more mental effort into trying to figure out how to do this in a completely different way. That loss of the routine, the confidence of how I'm doing it, that can also just go back to the human nature to avoid change."
A Harvard Business Review analysis adds a further dimension that is rarely discussed honestly. When employees gain access to AI tools, they do not simply work faster. They work broader, take on more tasks, and extend into longer hours. AI is not always reducing the burden of work. In some cases, it is intensifying it.
The Emergence of the AI Gatekeeper
In organisations where senior leaders are paying close attention, a new informal role has emerged: the AI gatekeeper. Randall Tinfow, CEO of REACHUM, estimates he spends roughly 20 hours of his 70-hour working week vetting AI tools and partners before they reach his team. His explicit goal is to protect his staff from the noise.
"There's so much noise, and I don't want our team to get distracted by that, so I'm the one who will take a look at something, decide whether it is reasonable or garbage, and then give it to the team to work with," Tinfow said.
This pattern is visible across UK and European businesses grappling with rapid AI adoption driven by competitive pressure and, in some cases, government incentives. Someone in the organisation needs to act as a filter. Without that role, the result is frustration, wasted time, and precisely the confidence erosion that ManpowerGroup's data is now capturing at scale.
The UK's AI Safety Institute, which operates under the Department for Science, Innovation and Technology, has begun addressing the workplace readiness dimension in its published guidance, though its primary mandate remains evaluating frontier model risks. The EU AI Act's requirements around human oversight and transparency in high-risk deployments add a regulatory layer that should, in theory, force organisations to think harder about how workers interact with AI systems. In practice, compliance is being treated as a legal checkbox rather than a genuine change management programme.
What Rebuilding Confidence Actually Requires
Candour's team has developed practical internal strategies that are worth examining as a model. Rather than broad deployment, they have focused on specific use cases where AI delivers demonstrable value. They built a Gemini Gem trained on brand guidelines that generates quotes clients can approve for media use. The result is a narrow, well-defined application that works reliably, rather than a sweeping mandate to use AI for everything.
Farrar's team's operational approach includes:
Building additional time into project schedules to account for the AI learning curve and potential failures
Framing experiments explicitly as "test and learn" to reduce pressure for immediate perfect results
Appointing dedicated AI champions within the team to stay current with developments and share knowledge
Running regular training sessions alongside honest check-ins about frustrations and blockers
Focusing deployment on specific use cases with clear, measurable value rather than across-the-board mandates
Farrar remains clear-eyed. The wins are real, but so are the failures, and pretending otherwise is precisely what destroys worker trust.
The Broader European Picture
The UK and European context has additional complexity that differentiates it from other markets. The EU AI Act is creating compliance obligations that affect how organisations can deploy AI in hiring, performance management, and other high-stakes workplace contexts. The Act's tiered risk framework means that the tools most likely to generate worker anxiety, those that touch job allocation, monitoring, or evaluation, face the heaviest scrutiny.
Yoshua Bengio, scientific director of Mila and one of the most prominent academic voices on AI governance, has repeatedly argued that the speed of deployment has outrun the field's understanding of how these tools affect human cognition and work patterns. His position, backed by substantial research, is that the confidence drop ManpowerGroup is measuring is not irrational. Workers are correctly identifying that AI tools are unreliable in ways that vendor marketing does not acknowledge.
The companies that will come out ahead are not those deploying the most tools. They are those investing in their people alongside the technology. That means structured training, accessible mentorship, psychological safety to experiment and fail without career consequences, and leadership willing to be honest that AI is not magic. It is a tool that requires skill, patience, and ongoing refinement.
The gap between AI adoption rates and genuine capability building has become one of the defining business challenges of 2026. For UK and European organisations, closing that gap is not optional. The regulatory environment, the labour market, and the competitive landscape all demand it. The honeymoon is over. The work starts now.
AI Terms in This Article
At scale: applied broadly, to a large number of users or use cases.
AI governance: the policies, standards, and oversight structures for managing AI systems.
AI safety: research focused on ensuring AI systems behave as intended without causing harm.