The problem is well documented in academic literature. Research by Gefen and colleagues found that anthropomorphism (how human-like an AI agent appears) is a stronger predictor of technology acceptance and adoption than trust itself. Surface-level human characteristics override rational assessment. That is a striking finding, and it should alarm every regulator overseeing AI deployment in sensitive sectors.
Professor Sandra Wachter of the Oxford Internet Institute has consistently argued that opaque and misleading AI design choices undermine users' ability to give meaningful, informed consent. Her position is unambiguous: language that implies consciousness or empathy where none exists is not a trivial marketing choice; it shapes the decisions real people make about real risks. The EU AI Act, which entered into force on 1 August 2024, explicitly requires providers of high-risk AI systems to ensure transparency about system capabilities and limitations, a requirement that anthropomorphic marketing actively subverts.
Real Risks in European Healthcare
The consequences extend far beyond marketing confusion. When people believe AI systems possess human-like understanding, they are more likely to seek guidance from them in situations that demand genuine clinical expertise. This is acutely concerning in healthcare, where AI health assistants and mental health chatbots are being deployed at pace across EU member states and the United Kingdom.
Consider the linguistic patterns that fuel these misperceptions. Companies train their models to replicate human communication styles, including conversational markers that suggest empathy and emotional awareness. A system learns to say "I understand how frustrating that must be" not because it experiences frustration, but because this pattern appeared frequently in its training data. This sophisticated mimicry creates what researchers call the "stochastic parrot" problem: the AI generates human-sounding responses based on statistical relationships in text, not genuine comprehension. Yet users consistently interpret these outputs as evidence of emotional intelligence.
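To make that mechanism concrete, here is a deliberately tiny sketch in Python. The corpus, phrase, and frequencies are all invented for illustration, but the principle scales up to large models: the program reproduces an "empathetic" sentence purely because that word sequence was frequent in its training text. Nothing in the code models frustration or empathy.

```python
from collections import Counter, defaultdict

# Toy stand-in for training data: hypothetical support-chat transcripts
# in which empathetic phrasing happens to be common.
corpus = [
    "i understand how frustrating that must be",
    "i understand how frustrating that must be",
    "i understand how difficult that must be",
    "i understand your concern completely",
]

# Count bigram frequencies: for each word, which word follows it most often?
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1

def continue_phrase(word, length=6):
    """Greedily emit the statistically most frequent continuation.

    There is no representation of emotion anywhere here; the output is
    determined entirely by co-occurrence counts in the corpus.
    """
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_phrase("i"))
# -> "i understand how frustrating that must be"
```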
Researcher Hermann, whose work on anthropomorphic design is widely cited in human-computer interaction studies, has warned that designing AI to seem more human does increase acceptance, but at the cost of dangerous over-reliance on flawed systems. Simple features such as human names and avatar images amplify these effects considerably, even when users are explicitly told they are interacting with an artificial system.
Technical Reality Versus Marketing Fantasy
The architecture of large language models lays bare the gap between perception and reality. These systems operate through transformer networks that predict the most probable next token in a sequence. They do not "think" about responses; they calculate probability distributions across vast vocabularies. When ChatGPT appears to "consider" different options before responding, it is running parallel computations to determine optimal output sequences. The apparent deliberation is a byproduct of processing time, not conscious reflection.
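What "considering" a response amounts to can be shown in a few lines. The tokens and scores below are invented for illustration, and a real model scores tens of thousands of candidate tokens at once, but the operation is the same: convert raw scores into a probability distribution with a softmax, then sample from it.

```python
import math

# Hypothetical raw scores (logits) a model might assign to candidate
# next tokens after the prompt "I understand how". Values are invented.
logits = {"frustrating": 4.1, "difficult": 3.7, "annoying": 2.9, "purple": -1.5}

# Softmax turns raw scores into a probability distribution. This
# calculation, not deliberation, is the entire "choice" of what to say.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

for tok, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{tok:12s} {p:.3f}")
```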
The contrast between marketing language and technical reality is stark:
- "AI confesses mistakes" describes what is, technically, an error-reporting mechanism. The word "confess" implies guilt and self-awareness that the system does not possess.
- "Model learns from feedback" describes parameter adjustment via gradient descent. The word "learns" implies conscious improvement.
- "AI creativity and imagination" describes novel recombinations of training patterns. Framing it as creativity attributes artistic inspiration to a statistical process.
- "System understands context" describes pattern matching in high-dimensional space. "Understands" suggests genuine comprehension that is simply not there.
Mistral AI, the Paris-based foundation model company and one of Europe's most prominent AI developers, has at least been more restrained in its public communications than its American counterparts, describing its systems in terms of capabilities rather than quasi-human traits. That restraint is not universal, however, and even European providers are under constant commercial pressure to make their products feel more relatable to end users.
The Trust Distortion and Its Feedback Loop
Anthropomorphic framing creates a dangerous feedback loop. Users who perceive AI as human-like demonstrate increased trust and reliance on these systems, and that elevated trust routinely exceeds the actual capabilities and reliability of the technology. The phenomenon is particularly acute in mental health applications, where AI chatbots are marketed as "empathetic listeners" or "caring counsellors." Users may share sensitive personal information or make significant decisions based on advice from systems that lack any genuine understanding of human psychology or individual circumstances.
The implications extend to broader AI adoption patterns across Europe. If users develop unrealistic expectations because of anthropomorphic marketing, disillusionment when these systems fail in complex scenarios is not just likely, it is inevitable. That disillusionment carries its own risks: a backlash that discredits genuinely useful AI applications alongside the irresponsible ones.
The EU's AI Office, established in February 2024 to oversee implementation of the AI Act, has indicated that transparency and human oversight provisions will be interpreted strictly for high-risk categories including medical devices and education. There is a reasonable argument that systematically anthropomorphic product design, by obscuring system limitations, constitutes a transparency failure under that framework. Enforcement has yet to catch up with the marketing departments, but the legal foundation to act is now in place.
What Responsible Communication Looks Like
The alternative to anthropomorphic language is not dry technical jargon that alienates ordinary users. It is honest, plain-language description of what these systems actually do. Describe AI as a pattern recognition system, a statistical model, or a prediction engine. Focus on demonstrated capabilities rather than implied mental states. Acknowledge limitations explicitly rather than burying them behind warm, human-sounding prose.
For European users and clinicians, developing AI literacy means maintaining critical thinking when interacting with these systems. Understanding that sophisticated language generation does not equal consciousness or genuine comprehension is the single most important corrective to the distortions that commercial incentives keep reinforcing. As AI systems become more capable of mimicking human communication, the temptation to anthropomorphise will only intensify. The question is whether regulators, clinicians, and an informed public will insist on clarity before the damage to patient trust and patient safety becomes irreversible.