OpenAI leak suggests ChatGPT is becoming a full productivity suite
6 min read


Internal testing at OpenAI points to sweeping upgrades for ChatGPT, including a task management system codenamed 'Salute', location-aware AI model selection, enterprise-grade security tunnelling, and inline code editing. The changes signal a decisive push to compete with established workplace tools across the EU and UK.

OpenAI is quietly testing a set of enhancements to ChatGPT that would transform it from a conversational tool into a genuine workplace platform, rivalling the likes of Notion, Jira, and Google Workspace. Leaked interface data and internal testing reports, first circulated among developer communities in late May 2025, point to four major capability areas: structured task management, location-optimised AI model selection, enterprise security tunnelling, and inline technical editing. If the rollout proceeds as suggested, ChatGPT's competitive footprint in the European enterprise market will expand considerably.

Key takeaways:

  • OpenAI is internally testing a task management feature codenamed 'Salute' inside ChatGPT
  • A new 'Secure Tunnel' feature aims to simplify ChatGPT integration for corporate IT teams
  • Location-aware model selection could challenge Google's local search dominance in EU markets
  • Inline code and maths editing blocks are expected to reduce friction for technical users


Task Management Revolution With the 'Salute' Feature

The most ambitious addition in the leaked data is a feature codenamed 'Salute', which introduces full task management capabilities directly inside ChatGPT. According to reports, users will be able to:

  • Create and label discrete tasks within an ongoing conversation thread
  • Attach relevant files and reference documents to each task
  • Monitor progress and status without leaving the ChatGPT interface
  • Transition fluidly from idea generation to implementation tracking
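OpenAI has published no schema for 'Salute', but the reported capabilities map naturally onto a small task record. The sketch below is purely illustrative: the class, field, and status names are assumptions, not leaked identifiers.

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    TODO = "todo"
    IN_PROGRESS = "in_progress"
    DONE = "done"

@dataclass
class Task:
    """A hypothetical task record: a labelled item with attached
    reference files and a status, living inside a conversation thread."""
    label: str
    attachments: list[str] = field(default_factory=list)
    status: Status = Status.TODO

# A thread could carry its own task list alongside its messages,
# so progress is tracked without leaving the conversation.
thread_tasks = [Task("Draft supplier brief", attachments=["brief_v1.docx"])]
thread_tasks[0].status = Status.IN_PROGRESS
```

The point of the sketch is the shape, not the names: each leaked capability (create and label, attach files, monitor status) corresponds to one field or operation on the record.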

The feature positions ChatGPT as a lightweight project management alternative, particularly attractive to sole traders, freelancers, and small businesses across the EU and UK who currently juggle multiple subscriptions. Analysts who track the productivity software market will recognise the pattern: Salesforce absorbed Slack, Microsoft embedded Copilot into Teams, and now OpenAI appears to be making its own push for the daily-driver slot on European desktops.

[Image: a professional at a standing desk in a European open-plan office, reviewing a chat interface with structured tasks]

Location Intelligence and Enterprise Security

A second significant change involves how ChatGPT selects its underlying models for location-specific queries. Internal code references an "is model preferred" flag that would allow the system to automatically route queries about local businesses, transport options, restaurant recommendations, and service directories to models fine-tuned for geographic relevance. For European users, where local language nuance and hyper-local business data matter enormously, this is a meaningful upgrade over today's generic responses.

Luca Bertuzzi, a Brussels-based technology policy reporter at MLex who closely follows the European AI regulatory environment, has noted that location-aware AI services operating in the EU will need to reconcile this kind of personalised data processing with obligations under the General Data Protection Regulation. Any model that infers a user's location to deliver tailored results must handle that inference transparently and give users meaningful opt-out rights. OpenAI has not publicly addressed how the location flag would be governed under GDPR, which is likely to draw scrutiny from data protection authorities in Ireland, Germany, and France once the feature goes live.

On the enterprise side, OpenAI is developing a 'Secure Tunnel' capability for Model Context Protocol servers. The mechanism uses outbound-only HTTPS connections, which means businesses can link their internal infrastructure to ChatGPT without opening inbound firewall ports. For IT security teams at European banks, insurers, and public-sector bodies, that distinction carries real weight: outbound-only architectures are far easier to audit and approve than bidirectional integrations. Benedikt Fuchs, a cloud security architect at the Munich-based consultancy msg systems, told colleagues at a recent enterprise AI roundtable that the absence of a secure tunnelling option had been one of the primary barriers to wider ChatGPT adoption among his financial services clients.
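The outbound-only pattern is essentially a reverse tunnel: the internal server initiates every connection, long-polling a cloud relay for pending requests and pushing results back. OpenAI has published no protocol details, so the sketch below shows only the generic control flow, with the transport calls stubbed out; in practice `fetch` and `post` would be HTTPS calls to a relay endpoint.

```python
import json

def tunnel_cycle(fetch, post, handle):
    """One cycle of an outbound-only tunnel: the internal server
    initiates a long-poll (outbound GET) to a relay, handles the job
    it receives entirely inside the corporate network, then pushes
    the result back (outbound POST). No inbound port is ever opened."""
    job = json.loads(fetch())   # outbound request picks up pending work
    result = handle(job)        # served behind the firewall
    post(json.dumps(result))    # outbound response delivers the answer
    return result

# Stand-in transports so the control flow is visible without a network.
inbox = ['{"tool": "sum", "args": [2, 3]}']
outbox = []
tunnel_cycle(inbox.pop, outbox.append,
             lambda job: {"result": sum(job["args"])})
```

Because both legs originate inside the network, a firewall auditor only has to approve one outbound HTTPS destination, which is why this shape is easier to sign off than a bidirectional integration.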

[Image: a data centre corridor at a European cloud facility, with a technician at work]

Inline Code and Maths Editing for Technical Users

Developers and researchers stand to benefit most from a fourth feature area: inline editable code blocks and mathematical expression fields. At present, refining a code snippet inside ChatGPT requires copying it out, editing it in an external editor, and pasting the corrected version back into the conversation. The new blocks would allow direct editing within the chat, functioning similarly to the rich-text tools already available for prose formatting.

For EU-based AI research institutions, including those affiliated with ETH Zurich and the ELLIS network of excellence labs, this kind of friction reduction is not trivial. Researchers who use ChatGPT as a drafting aid for technical papers or for rapid prototyping would be able to iterate on equations and code fragments without breaking their working context. The feature does not replace a proper integrated development environment, but it closes a genuine usability gap that has long frustrated technical users.

The list below summarises the four capability areas and their expected impact:

  • Task management ('Salute'): moves ChatGPT from conversation-only to project tracking, targeting small businesses and freelancers
  • Location-optimised model selection: improves relevance for local search queries, with GDPR compliance implications for EU deployments
  • Secure Tunnel for MCP servers: simplifies corporate integration by removing inbound firewall requirements
  • Inline code and maths blocks: reduces workflow interruption for developers and technical researchers

Strategic Context: A European Market Under Pressure

The timing of these enhancements is not accidental. Anthropic's Claude has made notable inroads among European enterprise customers since early 2025, partly on the strength of its longer context window and its reputation for more cautious outputs. Google is aggressively integrating Gemini into Workspace, a suite that already holds strong market share in European mid-market companies. OpenAI's response is to make ChatGPT stickier by expanding the surface area of tasks it can handle without the user leaving the platform.

From a regulatory standpoint, the expanded feature set will almost certainly attract attention under the EU AI Act, which began phasing in its obligations during 2024 and 2025. Task management and enterprise integration features that touch personal data, corporate workflows, or professional decision-making processes may require providers to publish conformity assessments or register systems in the EU database for high-risk AI applications. OpenAI has a Brussels office and has engaged with the AI Office, but the pace of its product expansion is testing the bandwidth of both regulators and corporate compliance teams.

The local search improvements, in particular, position ChatGPT more directly against Google in a space where the European Commission has already shown willingness to intervene. A platform that combines conversational AI with location-aware recommendations and task management starts to look less like a chatbot and more like a general-purpose service that regulators may wish to scrutinise under the Digital Markets Act as well as the AI Act.

AI Terms in This Article

inference — when an AI model processes input and produces output; the actual 'thinking' step.

context window — the maximum amount of text an AI model can consider at once.
