The Shared Imagination of AI: Why European Policymakers Should Pay Attention

AI models hallucinate in remarkably similar ways, achieving 54% accuracy when answering each other's fictional questions. The phenomenon, dubbed shared imagination, raises urgent questions for European regulators and organisations building AI governance frameworks under the EU AI Act.

Different AI systems are dreaming the same dreams, and that is a problem Europe cannot afford to ignore. New research confirms that leading generative AI models hallucinate in strikingly similar patterns, achieving a 54% accuracy rate when answering each other's entirely made-up questions. This is not random noise; it is evidence of something far more structural, and its implications for AI governance, information integrity, and technological sovereignty are directly relevant to regulators and enterprises across the EU and UK.

The Science Behind AI's Collective Hallucinations

The study in question, titled "Shared Imagination: LLMs Hallucinate Alike", tested 13 large language models from four major model families. Researchers prompted one AI system to invent questions about fictitious concepts, then asked competing models to verify or expand on those fabrications. The results were striking.
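As a rough illustration, the probe can be expressed in a few lines. This is a minimal sketch of the protocol as described here, not the authors' actual harness: `complete` is a hypothetical stand-in for whichever vendor SDK is in use, stubbed so the snippet runs, and the multiple-choice framing is an assumption for concreteness.

```python
# Minimal sketch of a cross-model "imaginary question" probe.
# `complete` is a hypothetical stand-in for a real chat-completion call
# (OpenAI, Anthropic, etc.); stubbed here so the sketch runs end to end.

def complete(model: str, prompt: str) -> str:
    return f"[{model}] stubbed response to: {prompt[:40]}..."

def invent_question(generator: str, topic: str) -> str:
    """Ask one model to fabricate a question about a concept that does not exist."""
    return complete(
        generator,
        f"Invent a plausible-sounding but entirely fictional concept in {topic}, "
        "then write one multiple-choice question about it with four answer options.",
    )

def probe(responders: list[str], question: str) -> dict[str, str]:
    """Ask competing models to answer the fabricated question as if it were real."""
    return {model: complete(model, question) for model in responders}

fake_question = invent_question("generator-model", "condensed-matter physics")
answers = probe(["rival-model-a", "rival-model-b", "rival-model-c"], fake_question)
```

Scoring responders against the generating model's intended answer yields the agreement rate; above-chance agreement on questions with no real-world referent is the shared-imagination signal the study reports.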

When OpenAI's GPT-4 invented details about a fictional scientific principle, Anthropic's Claude and Meta's LLaMA would frequently provide complementary information, as if drawing from a shared knowledge base that does not actually exist. The AI systems were not simply making the same factual errors; they were constructing consistent fictional frameworks that aligned across entirely different platforms and architectures.

"This shared imagination suggests fundamental similarities between AI models, likely acquired during pre-training on similar datasets," explains Dr Yilun Zhou, lead author of the study. The phenomenon moves well beyond simple factual inaccuracies into something that could be described as coordinated creativity.

For anyone working in AI development, compliance, or policy, the implications are immediate. Verification methods that rely on consensus across multiple AI systems are, in this light, deeply unreliable. If three independent models agree that a fictional physics concept is real, that agreement tells you almost nothing about whether the concept actually exists.
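A back-of-envelope comparison makes the point concrete. Reading the study's 54% figure loosely as the chance that any single model endorses a given fabrication (an illustrative assumption, not the paper's exact metric), unanimity among three models would be rare if their errors were independent and routine if they are shared:

```python
# Why cross-model consensus is weak evidence: probability that three models
# all endorse the same fabricated concept, under two extreme assumptions.

p = 0.54  # per-model endorsement rate, read loosely from the study's headline figure

independent = p ** 3   # errors uncorrelated across models: ~0.157
correlated = p         # errors perfectly shared ("shared imagination"): 0.54

print(f"P(unanimous | independent errors) = {independent:.3f}")
print(f"P(unanimous | shared errors)      = {correlated:.3f}")
```

Real systems sit between the two extremes, but shared imagination pushes them towards the correlated end, which is precisely where unanimity stops carrying information.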

[Image: researchers in a modern European AI research facility examining multi-screen visualisations]

Why This Matters for Europe's AI Governance Agenda

Europe is currently in the thick of implementing the EU AI Act, the world's most comprehensive binding framework for artificial intelligence. The shared imagination phenomenon cuts directly across two of the Act's central concerns: transparency and systemic risk.

Dragos Tudorache, the Romanian MEP who co-led the European Parliament's negotiations on the EU AI Act, has consistently argued that governance frameworks must account not just for individual model behaviour but for the systemic effects of AI deployment at scale. Shared hallucination patterns are precisely the kind of systemic risk that current conformity assessments are not well equipped to detect, because they emerge from the interaction between models rather than from any single system in isolation.

Equally relevant is the perspective of Yoshua Bengio, the Turing Award-winning AI researcher whose work on deep learning underpins much of today's large language model architecture, and who has become one of the most cited voices in European AI safety discussions. Bengio has warned repeatedly that the convergence of training data and architectural choices across major AI developers creates systemic fragility. Shared imagination is a concrete, measurable manifestation of exactly that fragility.

The Double-Edged Nature of Model Convergence

It would be dishonest to present shared imagination as entirely negative. The research also points to genuine opportunities. The structural similarities between models could accelerate development through more effective model merging and collaborative training approaches. Ensemble methods, where multiple AI systems work together by leveraging their aligned internal representations, could yield more robust applications, particularly in creative and generative contexts where consistent world-building is an asset rather than a liability.
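To make the merging opportunity concrete: one common technique is simple weight-space interpolation between checkpoints that share an architecture. The sketch below is illustrative plain PyTorch, not a method from the study; `merge_state_dicts` is a hypothetical helper name.

```python
import torch
import torch.nn as nn

def merge_state_dicts(sd_a: dict, sd_b: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate two parameter sets ("model soup" style merging).

    Only meaningful when both checkpoints share an architecture and are
    already close in weight space, the kind of structural similarity the
    shared-imagination research points to at the behavioural level.
    """
    return {key: alpha * sd_a[key] + (1 - alpha) * sd_b[key] for key in sd_a}

# Toy demonstration on two small, identically shaped networks.
model_a, model_b, merged = (nn.Linear(8, 2) for _ in range(3))
merged.load_state_dict(merge_state_dicts(model_a.state_dict(), model_b.state_dict()))
```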

The challenge is that these same properties make it considerably harder to build truly independent AI systems. European organisations investing in domestic AI capabilities, whether through initiatives such as France's support for Mistral AI or the pan-European ELLIS network of AI research institutes, face a structural headwind: if all major models are trained on broadly similar corpora using broadly similar architectures, the resulting systems will share not only capabilities but also failure modes.

This has direct consequences for organisations attempting to satisfy EU AI Act requirements around robustness and accuracy for high-risk applications. A compliance strategy that uses multiple AI models for cross-verification assumes those models are genuinely independent. Shared imagination research suggests that assumption may be wrong in practice.

What European Organisations Should Do Now

The research points to several concrete priorities for enterprises and regulators operating under European law.

  • Diversify training data sources: Organisations commissioning or fine-tuning AI models should actively seek training corpora that differ substantively from the dominant web-scraped datasets used by US hyperscalers. European public sector data, multilingual corpora, and domain-specific archives all offer meaningful differentiation.
  • Audit verification pipelines: Any internal process that uses AI consensus as a proxy for factual accuracy needs urgent review; cross-model agreement is not a reliable signal when the models share imaginative frameworks. A canary-based audit is sketched after this list.
  • Push for architectural diversity in procurement: Public sector buyers and large enterprises tendering for AI services should include architectural diversity as a scored criterion, not merely a nice-to-have.
  • Engage with the EU AI Office: The newly established EU AI Office, which sits within the European Commission and holds supervisory authority over general-purpose AI models, is developing codes of practice that could incorporate shared hallucination risk as a transparency disclosure requirement. The consultation process is open, and industry input is actively sought.
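For the verification-pipeline audit in particular, one practical pattern is a canary test: seed the pipeline with claims known to be fabricated and measure how often models confirm them, individually and unanimously. The sketch below is a minimal illustration; the canary claims are invented for the example, and `query_model` is a hypothetical stand-in for real vendor API calls, stubbed with correlated random answers so the snippet runs.

```python
import random

random.seed(0)

# Canary claims: deliberately fabricated; no model should confirm them as real.
CANARIES = [
    "the Meissner-Duval attenuation principle",
    "the Karlsruhe convergence theorem for sparse transformers",
    "Voss-Halloran renormalisation in photonic lattices",
]
MODELS = ["model_a", "model_b", "model_c"]

# A shared per-claim bias simulates correlated "shared imagination" errors.
_SHARED_BIAS = {claim: random.random() < 0.54 for claim in CANARIES}

def query_model(model: str, claim: str) -> bool:
    """Stand-in for asking a real model whether `claim` names a real concept."""
    return _SHARED_BIAS[claim] or random.random() < 0.1

def false_confirmation_rate() -> float:
    """Fraction of (model, canary) pairs where a fabrication is confirmed."""
    hits = sum(query_model(m, c) for m in MODELS for c in CANARIES)
    return hits / (len(MODELS) * len(CANARIES))

def unanimous_rate() -> float:
    """Fraction of canaries that every model confirms; a high value means
    cross-model consensus cannot be used as an accuracy signal."""
    return sum(all(query_model(m, c) for m in MODELS) for c in CANARIES) / len(CANARIES)

print(f"false confirmation rate: {false_confirmation_rate():.2f}")
print(f"unanimous on fabrications: {unanimous_rate():.2f}")
```

If the unanimous rate on canaries sits well above zero, the pipeline's consensus check should not be presented as evidence of accuracy in a conformity assessment.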

The Sovereignty Dimension

There is a broader strategic point worth making plainly. If the dominant AI models in global use share not only strengths but also systematic blind spots and fictional tendencies, then any nation or bloc that relies exclusively on those models inherits those blind spots. This is not an abstract concern for Europe. It is a concrete argument for continued investment in European-developed foundation models, genuinely diverse training pipelines, and research into architectural approaches that produce meaningfully different internal representations.

The shared imagination phenomenon is a reminder that diversity in AI is not merely a social good or a regulatory checkbox. It is a technical requirement for building systems that are genuinely robust and independently trustworthy. Europe's regulatory ambition on AI is well established. The question now is whether that ambition translates into the kind of technical investment that produces real independence rather than the appearance of it.

The paper is available as a preprint on the arXiv server and has been presented at major machine learning venues; further reading can be found in the proceedings of recent NeurIPS and ICLR conferences.

AI Terms in This Article

  • fine-tuning: Training a pre-built AI model further on specific data to improve its performance on particular tasks.
  • deep learning: Machine learning using neural networks with many layers to learn complex patterns.
  • machine learning: Software that improves at tasks by learning from data rather than being explicitly programmed.
  • generative AI: AI that creates new content (text, images, music, code) rather than just analyzing existing data.
  • hallucination: When AI generates confident-sounding but factually incorrect information.
  • at scale: Applied broadly, to a large number of users or use cases.
