This is not a minor churn event. It is a signal that users, and increasingly enterprise procurement teams, are applying ethical criteria to AI platform selection with the same rigour they once reserved for functionality and price. For public-sector bodies across the EU and UK, already operating under the EU AI Act and the UK's emerging AI governance framework, the question is no longer simply which model performs best. It is which vendor's values and contractual commitments are compatible with public accountability obligations.
Why Users Are Leaving
The catalyst is specific. OpenAI secured contracts to deploy AI models in classified US government networks and separately worked with Immigration and Customs Enforcement. For a company founded on the stated mission of developing AI for humanity's broad benefit, these partnerships struck a significant portion of the user base as a fundamental betrayal. Combined with high-profile leadership donations to political figures and a series of governance controversies at board level, the trust deficit has become too large to ignore.
Dr Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University and one of the UK's foremost scholars on AI accountability, has long argued that the governance structures of AI companies are inseparable from the safety properties of their products. The OpenAI situation illustrates precisely that thesis: when a company's institutional behaviour shifts, users reassess the product itself.
The European dimension here is material. The EU AI Act, which entered into force in August 2024 and is rolling out obligations through 2025 and 2026, imposes obligations on deployers of high-risk AI systems that amount, in practice, to due diligence on providers. That due diligence now plausibly includes assessing whether a provider's government contracts in third countries create risks of data access or mission drift incompatible with European fundamental rights standards.
How the Migration Works in Practice
For individual users and smaller teams, the mechanics of switching are more straightforward than many assume. OpenAI provides a data export function that packages conversation histories, custom instructions, and stored preferences into a downloadable archive. The process typically completes within 24 to 48 hours, though complex accounts may take longer. The steps are as follows:
- Navigate to ChatGPT settings and submit a full data export request.
- Wait for the confirmation email containing a secure download link.
- Verify the archive is complete before taking any further action (a verification sketch follows this list).
- Review and delete sensitive or outdated content from the export.
- Use Anthropic's recommended migration prompt to extract stored memories and preferences for import into Claude.
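For those who want to sanity-check the download before deleting anything, the short Python sketch below opens the export archive and confirms the conversation file is present. It assumes the layout OpenAI's exports have used to date, a zip containing a conversations.json at the top level; the archive filename is a placeholder and the format is undocumented, so treat this as a starting point rather than a guarantee.

```python
import json
import zipfile
from pathlib import Path

# Placeholder path: substitute the actual archive name from your download email.
ARCHIVE = Path("chatgpt-export.zip")

with zipfile.ZipFile(ARCHIVE) as zf:
    names = zf.namelist()
    print(f"{len(names)} files in archive")
    # Past exports have shipped conversations.json at the top level;
    # OpenAI may change this layout at any time.
    if "conversations.json" not in names:
        raise SystemExit("conversations.json missing: export may be incomplete")
    conversations = json.loads(zf.read("conversations.json"))

print(f"{len(conversations)} conversations exported")
for conv in conversations[:5]:
    print("-", conv.get("title") or "(untitled)")
```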
One important caveat: OpenAI's own documentation acknowledges that deleted chats may take up to 30 days to be fully purged from its systems, and that some data may be retained for security or legal reasons without specifying the precise scope. For European users subject to GDPR, that ambiguity is not merely an inconvenience. It is a compliance question. Any organisation using ChatGPT for tasks involving personal data should be scrutinising its data processing agreement with OpenAI regardless of whether it plans to switch platforms.
Anthropic has actively reduced switching friction by publishing detailed guidance on importing ChatGPT data into Claude. The comparison below illustrates how the two platforms handle transferred information:
- Personal preferences: ChatGPT limits these to the settings menu; Claude preserves full context on import.
- Conversation history: A complete archive is available from ChatGPT, though memory extraction is required for Claude compatibility.
- Custom instructions: Exported directly from ChatGPT settings and supported for import by Claude.
- Project context: Requires manual extraction from ChatGPT; Claude can process the output automatically (see the extraction sketch after this list).
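The manual extraction mentioned in the last item can be partly scripted. The sketch below assumes the conversations.json structure seen in past exports, where each conversation carries a "mapping" of message nodes, and writes every conversation out as a plain-text file that a Claude project can ingest. The structure is undocumented and may change, so spot-check the output before importing anything.

```python
import json
from pathlib import Path

OUT = Path("claude-import")
OUT.mkdir(exist_ok=True)

conversations = json.loads(Path("conversations.json").read_text(encoding="utf-8"))

for i, conv in enumerate(conversations):
    lines = [f"# {conv.get('title') or 'Untitled'}"]
    # Collect timestamped messages from the node mapping and order them;
    # nodes without a message or timestamp (system stubs) are skipped.
    messages = sorted(
        (node["message"] for node in conv.get("mapping", {}).values()
         if node.get("message") and node["message"].get("create_time")),
        key=lambda m: m["create_time"],
    )
    for msg in messages:
        role = msg.get("author", {}).get("role", "?")
        parts = msg.get("content", {}).get("parts") or []
        text = "\n".join(p for p in parts if isinstance(p, str)).strip()
        if text and role in ("user", "assistant"):
            lines.append(f"{role.upper()}: {text}")
    out_file = OUT / f"conversation-{i:04d}.txt"
    out_file.write_text("\n\n".join(lines), encoding="utf-8")
```

Sorting by timestamp rather than walking the node tree keeps the sketch short; a production script would follow parent and child links to preserve branched conversations.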
Users should review extracted memories carefully before importing. Outdated details, redundant preferences, and any sensitive personal information should be removed to ensure Claude begins with accurate, current context rather than accumulating legacy noise from a previous platform.
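Automated screening can support, though not replace, that manual review. A minimal sketch, using deliberately crude regex patterns for emails and phone numbers (illustrative, not exhaustive), flags entries for human attention before anything reaches Claude:

```python
import re

# Illustrative patterns only; a real review needs broader coverage and human judgement.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{8,}\d"),
}

def flag_sensitive(text: str) -> list[str]:
    """Return the names of every pattern that matches, for human triage."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

# Hypothetical extracted memories, for demonstration.
memories = [
    "Prefers concise answers with British spelling.",
    "Reach me on +44 20 7946 0000 about the Q3 report.",
]
for memory in memories:
    hits = flag_sensitive(memory)
    print(("REVIEW" if hits else "keep  "), memory, hits or "")
```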
The Ethics Argument and Its European Resonance
Anthropic's positioning as an AI safety company, built around its constitutional AI methodology and explicit limitations on government access to user data, has found particularly fertile ground among privacy-conscious European users. The contrast with OpenAI's recent trajectory is stark enough to show up in download statistics.
Dragoș Tudorache, the Romanian MEP who served as the European Parliament's co-rapporteur on the AI Act, has consistently argued that trustworthiness in AI is not a soft value but a hard commercial and regulatory requirement. The market movement now visible in the ChatGPT exodus vindicates that framing: users are demonstrating that ethical alignment is a switching trigger, not merely a procurement checkbox.
This does not mean Anthropic is beyond scrutiny. Both companies face structural tensions between commercial growth, investor expectations, and the safety research missions they publicly espouse. European deployers would be unwise to treat Claude as a values-free alternative simply because it is currently the preferred destination for departing ChatGPT users. The appropriate response is rigorous vendor assessment of both, not a reflexive migration driven by the same social contagion that is producing the exodus in the first place.
What the episode does confirm, clearly and usefully, is that AI platforms are acquiring the stickiness of social networks. Users build personalised context, workflows, and institutional memory inside these systems over months and years. Data portability therefore becomes a condition of healthy competition, not an optional feature. The EU's Data Act and the AI Act's interoperability provisions are relevant here, and European regulators should be watching how both OpenAI and Anthropic handle portability requests in practice.
What European Organisations Should Do Now
Whether or not you are considering switching platforms, this moment presents a useful forcing function for any European organisation using large language model services. The practical steps are clear, and an illustrative tracking template follows the list:
- Audit existing AI vendor contracts for data retention, government access, and jurisdiction clauses.
- Assess whether your current provider's government partnerships in third countries create risks under GDPR or the AI Act.
- Test your data export and portability rights with any AI platform you rely on, before you need them urgently.
- Require vendors to specify, in writing, the scope and duration of any data retention for security or legal purposes.
- Include ethical governance criteria, not just technical performance benchmarks, in future AI procurement evaluations.
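For teams that want to track these checks rather than run them once, here is one illustrative way to encode the list as an internal assessment record. The field names are shorthand of our own, not terms drawn from the GDPR or the AI Act, and the structure is a sketch rather than a compliance tool.

```python
from dataclasses import dataclass, field

@dataclass
class AIVendorAssessment:
    """One record per vendor, mirroring the five steps above."""
    vendor: str
    jurisdiction_clause_reviewed: bool = False      # step 1
    third_country_gov_contracts: list[str] = field(default_factory=list)  # step 2
    export_last_tested: str | None = None           # step 3, e.g. an ISO date
    retention_scope_in_writing: bool = False        # step 4
    ethics_criteria_in_scoring: bool = False        # step 5

    def open_items(self) -> list[str]:
        gaps = []
        if not self.jurisdiction_clause_reviewed:
            gaps.append("contract clauses unreviewed")
        if self.export_last_tested is None:
            gaps.append("portability untested")
        if not self.retention_scope_in_writing:
            gaps.append("no written retention scope")
        return gaps

# Example: a fresh assessment shows every gap still open.
print(AIVendorAssessment(vendor="ExampleAI Ltd").open_items())
```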
The 1.5 million users currently voting with their feet are mostly individuals making personal choices. European public-sector bodies and regulated enterprises have less flexibility but arguably more at stake. The frameworks to act on these concerns already exist. The question is whether procurement and legal teams will use them.