ChatGPT Now Remembers Conversations From a Year Ago
OpenAI has rolled out a significant memory upgrade for ChatGPT Plus and Pro subscribers, enabling the AI to recall conversations spanning up to 12 months. The enhancement transforms ChatGPT from a session-based tool into a persistent digital assistant, though European data protection rules are already shaping how and where the feature lands.
OpenAI has delivered its most consequential ChatGPT upgrade in months, giving Plus and Pro subscribers access to conversations stretching back a full year. The change is not cosmetic. It shifts ChatGPT from a tool you reset every session into something closer to a persistent professional colleague that remembers your projects, preferences, and past decisions without being prompted.
The rollout is global, but European users are already encountering a familiar wrinkle: GDPR and UK GDPR constraints mean the feature faces regional restrictions in parts of the EU and the United Kingdom, echoing the pattern seen when OpenAI delayed its GPT-4o voice mode for European markets in 2024.
How the Memory System Actually Works
ChatGPT's memory operates through semantic indexing, storing the most relevant, most recent, and most frequently referenced information from your conversation history. The system works within token limits, with GPT-4-turbo models handling up to 128,000 tokens of stored context. When you query a past discussion, ChatGPT displays a brief "remembering" prompt before surfacing summaries, complete with direct links back to the original chat threads via a Sources panel.
The practical upshot is significant. Users can recover a chilli sauce recipe discussed six months ago, retrieve a block of code written last spring, or pick up a research thread that stalled over winter, without manually scrolling through browser history. The AI retrieves the top five to twenty most contextually relevant entries depending on the specificity of your query.
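OpenAI has not published how its retrieval works internally, but the behaviour described above (rank stored conversation summaries by relevance to the query, return the top handful) is the classic top-k similarity search pattern. The sketch below is purely illustrative, using a toy bag-of-words embedding in place of the learned dense embeddings a production system would use; the `memories` list and all function names are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words token counts. A real system would
    # use a learned dense embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query: str, memories: list[str], k: int = 5) -> list[str]:
    # Rank stored conversation summaries by similarity to the query
    # and surface only the k most relevant entries.
    q = embed(query)
    ranked = sorted(memories, key=lambda m: cosine(q, embed(m)), reverse=True)
    return ranked[:k]

memories = [
    "Chilli sauce recipe with smoked paprika and lime",
    "Python script to rename photo files by date",
    "Notes on GDPR Article 5 data minimisation",
]
print(retrieve("how did I make that chilli sauce?", memories, k=1))
```

The key design point is that only the top-k matches are injected back into the model's context window, which is how a year of history can coexist with a fixed 128,000-token limit.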
The Sources Integration
The memory upgrade integrates directly with ChatGPT's Sources feature. Recalled conversations surface with hyperlinks to original chat logs, making long-term project management considerably more coherent. For researchers, lawyers, consultants, or anyone managing multi-month workstreams, the ability to jump straight back into the context of an older thread without losing momentum is genuinely useful rather than a novelty feature.
| Memory Feature | Before Update | After Update |
| --- | --- | --- |
| Conversation Recall | Session-based only | Up to 12 months |
| Context Retrieval | Manual browser search | AI-powered semantic search |
| Source Links | Not available | Direct links to original chats |
| Token Capacity | Limited to session | 128,000 tokens for GPT-4-turbo |
The "Context Rot" Problem
Enhanced memory is not without its risks, and European AI researchers have been among the most vocal about the failure modes of persistent large language model context. The core concern is what critics call "context rot": the gradual accumulation of stale preferences, outdated assumptions, and outright contradictions that quietly degrade response quality over time.
Researchers at ETH Zurich studying long-horizon AI task performance have flagged that persistent memory systems require active curation rather than passive accumulation; a model that simply remembers everything begins to behave less reliably as contradictory signals compound. The European AI Office, established under the EU AI Act framework and now operational in Brussels, has signalled that transparency in memory management will be a compliance consideration for general-purpose AI systems operating in the bloc.
OpenAI has built in user controls to address this. Subscribers can view, edit, and delete specific memories, and can disable the memory function entirely for sensitive conversations. Whether users will actually manage their memory stores regularly enough to prevent degradation is a separate question.
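The controls OpenAI describes (view, edit, delete individual memories, with stale entries being the failure mode researchers worry about) map onto a simple curated-store pattern. This is a hypothetical sketch of that pattern, not OpenAI's implementation; the `MemoryStore` class and its pruning thresholds are invented for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Memory:
    text: str
    created: datetime
    uses: int = 0  # bumped each time the memory is recalled

class MemoryStore:
    """Toy user-curated memory store: view, edit, delete, and prune."""

    def __init__(self) -> None:
        self._items: list[Memory] = []

    def add(self, text: str) -> None:
        self._items.append(Memory(text, datetime.now()))

    def view(self) -> list[str]:
        return [m.text for m in self._items]

    def edit(self, index: int, new_text: str) -> None:
        # Lets the user correct an outdated preference in place.
        self._items[index].text = new_text

    def delete(self, index: int) -> None:
        del self._items[index]

    def prune(self, max_age_days: int = 365, min_uses: int = 1) -> int:
        # Active curation: drop memories that are both old and rarely
        # recalled, the "context rot" candidates. Returns count removed.
        cutoff = datetime.now() - timedelta(days=max_age_days)
        before = len(self._items)
        self._items = [m for m in self._items
                       if m.created >= cutoff or m.uses >= min_uses]
        return before - len(self._items)
```

The `prune` method is the point researchers are making: without some rule that retires old, unused entries, contradictory signals simply accumulate.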
Privacy and Regulatory Dimensions for European Users
For EU and UK users, the privacy calculus here is not trivial. Storing up to 12 months of conversational data ties directly into obligations under GDPR Articles 5 and 13, covering data minimisation and transparency respectively. OpenAI's privacy policy governs data retention, but the interaction between that policy and member-state data protection authority guidance is still being worked through.
Andrea Jelinek, former chair of the European Data Protection Board, has previously emphasised that AI systems storing personal data over extended periods must provide users with genuinely meaningful controls, not checkbox compliance. The memory feature, on paper, offers those controls. Whether the implementation satisfies regulators across all 27 EU member states remains to be seen.
Practical Applications
Long-term project management, with ChatGPT retaining goals, preferences, and progress updates across months
Research continuity across multiple sessions, building cumulatively on previous findings
Creative writing projects, where the AI maintains character details, plot points, and stylistic preferences
Learning programmes that adapt to a user's comprehension level and preferred explanation styles
Professional development tracking, remembering career goals and skill-building conversations
Availability and Tier Restrictions
The extended memory feature is exclusive to ChatGPT Plus and Pro subscribers. Free-tier users remain on session-based memory, which resets with each new conversation thread. No manual update is required; the feature rolls out automatically, though European users in certain jurisdictions may find it unavailable or restricted pending regulatory review.
The competitive implications are clear. Memory depth is fast becoming a primary differentiator among AI assistants, alongside model capability and integration breadth. Mistral AI, the Paris-based frontier lab, has not yet announced an equivalent persistent memory feature for its Le Chat assistant, though the pressure to match OpenAI's personalisation depth will only increase as enterprise adoption accelerates across the continent.