ChatGPT Exodus: Why 1.5 Million Users Are Switching to Claude and What European Professionals Should Know

More than 1.5 million ChatGPT subscribers are abandoning OpenAI's platform over its military and immigration enforcement contracts, with Anthropic's Claude emerging as the primary destination. European professionals face real questions about data portability, AI governance, and which platform's values best align with the EU's tightening regulatory framework.

A user revolt of significant scale is under way at OpenAI. More than 1.5 million ChatGPT subscribers have left the platform in recent weeks, citing ethical objections to its partnerships with the U.S. Department of Defence and Immigration and Customs Enforcement. Anthropic's Claude has emerged as the primary beneficiary, topping App Store charts and surpassing ChatGPT in daily downloads. For European users and enterprises already navigating the EU AI Act, this migration raises pointed questions about whose values they are embedding into their daily workflows.

Key Takeaways

  • Over 1.5 million ChatGPT users have switched platforms, citing military and enforcement contracts
  • Anthropic's Claude now leads daily App Store downloads in several markets
  • EU AI Act obligations make vendor ethics a compliance question, not just a preference
  • Deleted ChatGPT data can take up to 30 days to be fully purged
  • Memory portability between platforms remains technically imperfect but is improving


Why Users Are Leaving Now

The immediate catalyst is OpenAI's pivot toward U.S. government contracts, including deploying AI models in classified defence networks and providing tools to Immigration and Customs Enforcement. For a company founded on the mission of developing AI for humanity's broad benefit, these decisions represent a material shift that a significant slice of its user base refuses to accept.

The timing matters for European observers. The EU AI Act entered into force on 1 August 2024, with obligations for high-risk AI systems and general-purpose AI models phasing in through 2025 and 2026. Kilian Gross, Head of the AI Unit at the European Commission's DG CONNECT, has consistently emphasised that transparency and fundamental-rights compatibility are not optional extras under the new framework. When a major AI vendor's partnership strategy directly implicates surveillance and military use, European procurement officers and compliance teams cannot simply look away.

Anthropic's positioning as an AI safety company, built around its Constitutional AI methodology and stated limits on government access, has resonated strongly with privacy-conscious users on both sides of the Atlantic. The contrast with OpenAI's recent trajectory is stark enough to drive genuine switching behaviour at scale.

[Image: A professional at a standing desk reviewing two AI chat interfaces on a dual-monitor setup in a modern European open-plan office]

Securing Your ChatGPT Data Before You Leave

Users planning to move face a practical challenge: preserving months or years of conversational history, custom instructions, and personalised AI memory. OpenAI does provide a data export mechanism, but the process is neither instant nor free of ambiguity.

The steps for a complete and orderly departure are as follows:

  1. Navigate to ChatGPT Settings and request a full data export.
  2. Wait for the confirmation email containing a secure download link; standard processing takes 24 to 48 hours, though complex accounts may take longer.
  3. Verify the archive is complete before taking any further action on your account.
  4. Delete individual chats for privacy purposes, but only after confirming the export is in hand.
  5. Proceed with account deletion if desired, noting that OpenAI states deleted data can take up to 30 days to be fully scrubbed, and that some data may be retained for security or legal obligations.
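Step 3, verifying the archive, is worth doing programmatically rather than by eye. A minimal sketch in Python, assuming the export zip contains a conversations.json file holding a list of conversation objects with a "title" field (the layout recent ChatGPT exports have used; check your own archive's contents before relying on it):

```python
import json
import zipfile

def summarize_export(path):
    """Summarize a ChatGPT data-export archive before deleting anything.

    Returns the number of conversations and their titles so you can
    spot-check the archive against what you remember having.
    """
    with zipfile.ZipFile(path) as archive:
        with archive.open("conversations.json") as f:
            conversations = json.load(f)
    titles = [c.get("title") or "(untitled)" for c in conversations]
    return {"count": len(titles), "titles": titles}
```

If the reported count looks short, or titles you know exist are missing, hold off on chat deletion and request a fresh export.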

That last point deserves scrutiny. OpenAI's own support documentation acknowledges retention obligations without specifying their scope. For European users, this intersects directly with GDPR Article 17 rights to erasure. If you are leaving over privacy concerns, the vagueness of OpenAI's retention policy is not reassuring, and you may wish to submit a formal Subject Access Request alongside your export to establish a documentary record.

Lilian Edwards, Professor of Law, Innovation and Society at Newcastle University and one of the UK's most cited authorities on platform data governance, has argued publicly that AI platforms must be held to the same data portability standards as social networks under existing EU and UK data protection law. The current opacity around AI conversation data retention is, in her assessment, legally precarious for vendors operating in European jurisdictions.

Moving Your AI Memory to Claude

Anthropic has actively reduced friction for users making the switch by publishing guidance on importing ChatGPT data into Claude. The process is not a simple file upload; it requires extracting stored memories and preferences from your ChatGPT archive using a structured prompt, then feeding that context into Claude's memory system.

The key data categories and their portability status are worth understanding before you begin:

  • Personal preferences: Limited to the settings menu in ChatGPT; Claude accepts full context via direct import.
  • Conversation history: A complete archive is available from ChatGPT, but memory extraction is required for meaningful use in Claude.
  • Custom instructions: Exported in ChatGPT's settings data and directly importable into Claude.
  • Project context: Requires manual extraction from ChatGPT; Claude can process it automatically once provided.

A word of caution before importing: the extracted memory should be reviewed carefully. Outdated information, sensitive personal details, and irrelevant context should be removed before you hand Claude a profile built up over potentially years of interaction. Starting with a clean, accurate snapshot serves you better than dragging across accumulated noise.
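That review step can be partly automated. A minimal sketch of a screening pass, assuming your extracted memory is a plain list of one-line entries (the helper name and the patterns are illustrative, not part of either platform's tooling; the patterns catch only obvious cases and are no substitute for reading the entries yourself):

```python
import re

# Illustrative patterns for obviously sensitive content. Extend these
# to match whatever personal details your own memory entries contain.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{16}\b"),                   # card-like 16-digit numbers
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email addresses
]

def screen_memories(entries):
    """Split memory entries into a keep list and a review list.

    Entries matching any sensitive pattern go to the review list for
    manual inspection before anything is imported into Claude.
    """
    keep, review = [], []
    for entry in entries:
        if any(p.search(entry) for p in SENSITIVE_PATTERNS):
            review.append(entry)
        else:
            keep.append(entry)
    return keep, review
```

Anything in the review list should be edited or dropped before import; only the keep list goes into the structured prompt.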

[Image: A data privacy workshop in a glass-walled meeting room at a European technology company, with participants reviewing a data export workflow on a large screen]

The Broader Signal for European AI Governance

This migration is more than a consumer preference story. It signals that ethical positioning has become a genuine competitive variable in the AI platform market, one that European regulators and enterprise buyers have been arguing for since before the AI Act was drafted.

Philipp Lorenz-Spreen, a researcher at the Max Planck Institute for Human Development in Berlin who studies digital platform behaviour and user autonomy, has noted in recent work that users increasingly demonstrate what he terms "values-based switching" when platform behaviour diverges from stated principles. The ChatGPT exodus fits that pattern precisely. It is not primarily about functionality; Claude and ChatGPT remain broadly comparable on core tasks. It is about whose side the platform appears to be on.

For European enterprises, the implications extend beyond individual preference. Under the EU AI Act, deploying a general-purpose AI model in a high-risk context requires documented due diligence on the provider's governance practices. A vendor whose public partnerships raise questions about military and enforcement applications is a harder due-diligence case to close. Procurement teams at large organisations are already factoring this into vendor assessments.

The migration also validates a long-standing argument from digital rights advocates: AI platforms are becoming as sticky as social networks, and data portability is therefore a public-interest issue, not a niche technical concern. The relative ease with which users can now transfer context from ChatGPT to Claude is a small but meaningful step toward a more competitive, user-controlled AI ecosystem. European policymakers pushing for interoperability under the Data Act should take note.

AI Terms in This Article

  • embedding: Converting text or images into numbers that capture their meaning, so AI can compare them.
  • at scale: Applied broadly, to a large number of users or use cases.
  • ecosystem: A network of interconnected products, services, and stakeholders.
  • pivot: Fundamentally changing a business strategy or product direction.
  • AI governance: The policies, standards, and oversight structures for managing AI systems.
  • AI safety: Research focused on ensuring AI systems behave as intended without causing harm.

