230 Million Weekly Health Queries Later, OpenAI Launches ChatGPT Health. Europe Has Serious Questions.
OpenAI has launched ChatGPT Health, a dedicated healthcare section enabling users to link medical records and wellness app data, built with input from over 260 physicians. With 230 million weekly health queries already flowing through ChatGPT, the product arrives amid sharp scrutiny from European regulators over AI safety, data sovereignty, and clinical liability.
OpenAI has moved decisively into healthcare, launching ChatGPT Health as a dedicated section within its chatbot platform, positioning it as a personal "healthcare ally" for the roughly 230 million people who already use ChatGPT weekly for health and wellness queries. The product is real, the demand is documented, and the regulatory and safety questions it raises for European users are equally real and cannot be wished away.
The feature, developed over two years with input from more than 260 physicians, is currently available via waitlist and is set to roll out more broadly to web and iOS users. It allows secure integration with medical record services such as b.well Connected Health, as well as popular wellness applications including Apple Health, MyFitnessPal, Function, and Weight Watchers.
What ChatGPT Health Actually Does
ChatGPT Health operates as a compartmentalised environment within the broader ChatGPT interface. It features enhanced encryption and isolation from the main chat history, and critically, conversations held within it are not used by default to train OpenAI's foundational models. That last point matters enormously for users concerned about sensitive health data feeding back into commercial AI pipelines.
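OpenAI has not published the technical details of that isolation, but the behaviour it describes maps onto a familiar storage pattern: health conversations held in their own namespace, encrypted at rest, and excluded from training pipelines unless a user opts in. A minimal sketch of that pattern, with entirely hypothetical names and structure, might look like this:

```python
# Illustrative sketch only: OpenAI has not published ChatGPT Health's
# internals. This models the *described* behaviour (isolation plus a
# training opt-out that defaults to off); every name here is hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class HealthConversation:
    user_id: str
    ciphertext: bytes                  # conversation body, encrypted at rest
    namespace: str = "health"          # kept apart from main chat history
    use_for_training: bool = False     # excluded from model training by default
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

def training_corpus(records: list[HealthConversation]) -> list[HealthConversation]:
    """Only records whose owners explicitly opted in are ever eligible."""
    return [r for r in records if r.use_for_training]
```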
Fidji Simo, OpenAI's CEO of Applications, framed the product around access and continuity: "We're addressing existing issues in the healthcare space, like cost and access barriers, overbooked doctors, and a lack of continuity in care." Simo also shared a personal account of how ChatGPT helped her identify a potentially dangerous drug interaction following a hospital stay, positioning AI as a complement to, rather than a replacement for, clinical professionals.
The platform is explicitly not designed for diagnosis or treatment. It is programmed to direct users towards healthcare professionals when conversations take a concerning turn. That guardrail is the right instinct, though whether it is sufficient is a different question entirely.
The Compliance Gap That European Users Must Understand
ChatGPT Health is not HIPAA compliant, a distinction that matters less in Europe than it might in the United States, but the underlying concern translates directly to the General Data Protection Regulation. GDPR classifies health data as a special category requiring explicit consent, strict purpose limitation, and in many cases a Data Protection Impact Assessment before processing can begin. OpenAI has confirmed that data can still be disclosed when legally required, for example under a court order or in an emergency, meaning the compartmentalisation, while meaningful, is not absolute.
The European Data Protection Board has previously issued guidance making clear that AI systems processing health data must demonstrate a lawful basis under Article 9 of GDPR, a bar that consumer wellness products frequently struggle to clear. Andrea Jelinek, former chair of the EDPB, has consistently argued that "the sensitivity of health data demands the highest standards of transparency and user control," a standard that OpenAI's current framework only partially meets.
Equally relevant is the EU AI Act, which came into force in stages from 2024. AI systems intended to influence health decisions sit in a risk category that attracts significant compliance obligations, including transparency requirements and human oversight mandates. Kai Zenner, head of office and digital policy adviser to MEP Axel Voss and one of the architects of the AI Act's technical provisions, has noted that consumer-facing health AI tools will face increasing scrutiny as national competent authorities begin enforcement. OpenAI's product, however carefully designed, will not be immune to that scrutiny.
Safety Concerns Are Not Hypothetical
The timing of the launch is notable. Documented cases of harm from AI health advice are accumulating. A case reported in August 2025 involved a man hospitalised after allegedly acting on ChatGPT's suggestion to replace table salt with sodium bromide. Google's AI Overview feature has separately faced criticism for producing unsafe medical recommendations, including misleading guidance on liver function tests and dietary advice for pancreatic cancer patients. A study from Mount Sinai, also published in August 2025, concluded that widely used AI chatbots are "highly vulnerable" to disseminating harmful health information.
OpenAI is aware of this landscape. Simo stated plainly: "We are not designed for diagnosis or treatment. We've done extensive work to fine-tune the model to ensure we provide information without being alarmist." That is a necessary statement to make. It is not, on its own, a sufficient quality assurance framework for a product used by tens of millions of people with real clinical needs.
Anthropic unveiled its own healthcare AI tools within days of OpenAI's announcement, underscoring that this is now a competitive market segment rather than an experimental side project. The intensity of that competition is itself a risk factor: product velocity and patient safety do not always move at the same pace.
What a Responsible European Deployment Looks Like
The more instructive comparison for European readers is not what competitors are doing in other markets but what structured, regulated health AI deployment looks like when it is done carefully. NHS England has been piloting AI-assisted triage and clinical decision support tools under formal procurement and clinical governance frameworks, with named clinicians retaining responsibility for every patient-facing output. That model is slower and more expensive than a consumer app waitlist, but it produces a clear audit trail and an accountable party when things go wrong.
ETH Zurich's AI Centre and several European hospital networks have been developing health AI frameworks that embed clinician oversight at the point of output rather than treating professional referral as a fallback for edge cases. The difference is architectural, not cosmetic, and it reflects a fundamentally different view of where liability sits.
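Neither framework is public as code, but the contrast can be sketched in a few lines. In the consumer pattern, a filter decides after the fact whether to escalate; in the governed pattern, a clinician sits inside the output path itself. Everything below, from the function names to the flag_concerning heuristic, is this article's illustration rather than any vendor's implementation:

```python
# Hypothetical contrast between the two architectures described above.
from typing import Callable

def consumer_flow(query: str, model: Callable[[str], str],
                  flag_concerning: Callable[[str, str], bool]) -> str:
    """Referral as a fallback: the model answers unless a filter trips."""
    answer = model(query)
    if flag_concerning(query, answer):
        return "Please consult a healthcare professional."  # escalation path
    return answer  # reaches the user with no human review

def governed_flow(query: str, model: Callable[[str], str],
                  clinician_review: Callable[[str], str]) -> str:
    """Oversight at the point of output: every answer passes a named clinician."""
    draft = model(query)
    return clinician_review(draft)  # approve, amend, or reject before release
```

The liability question follows directly from the shape of the code: in the second function there is always a human whose sign-off precedes the patient-facing output.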
The key considerations for any European deployment of a product like ChatGPT Health include: GDPR Article 9 compliance for special category health data; AI Act risk classification and conformity assessment; integration with existing national electronic health record systems; clear professional liability allocation when AI-generated information contributes to a clinical decision; and meaningful patient recourse mechanisms when harm occurs.
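For commissioners who want to operationalise that list, it reduces to a blunt readiness check. The sketch below encodes the five criteria as a checklist; the structure and key names are this article's own, not a regulator's schema:

```python
# Hypothetical readiness checklist mirroring the considerations above.
CRITERIA = {
    "gdpr_article_9_basis": "Lawful basis for special category health data",
    "ai_act_conformity": "AI Act risk classification and conformity assessment",
    "ehr_integration": "Integration with national electronic health records",
    "liability_allocation": "Clear professional liability when AI informs care",
    "patient_recourse": "Meaningful redress mechanism when harm occurs",
}

def readiness_gaps(deployment: dict[str, bool]) -> list[str]:
    """Return the criteria a proposed deployment has not yet satisfied."""
    return [desc for key, desc in CRITERIA.items()
            if not deployment.get(key, False)]

# Example: a deployment that clears data protection but nothing else
# still reports four open gaps.
print(readiness_gaps({"gdpr_article_9_basis": True}))
```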
OpenAI's product addresses some of these partially, and none of them completely. That is not a reason to dismiss the product, but it is a reason for European users, employers, and healthcare commissioners to engage with it critically rather than simply adopt it because the demand signal is compelling. Demand and safety are not the same variable, and in healthcare, conflating them has consequences that show up in wards rather than dashboards.