OpenAI’s launch of ChatGPT Health, a new feature designed to answer health and wellness questions, is accelerating the debate over how artificial intelligence should fit into modern medical care. The tool arrives as millions of people already turn to AI for explanations of symptoms, lab results, and treatment options — a trend that experts say offers both promise and risk.
Dr. David Liebovitz, an artificial intelligence and clinical medicine specialist at Northwestern University, spoke with Medical News Today about how ChatGPT Health may reshape patient behavior, clinical workflows, and the doctor–patient relationship.
A Potential Boost for Patient Preparedness — With Caveats
Liebovitz says ChatGPT Health could help patients arrive at appointments better organized, with clearer questions and a more accurate understanding of their medical history. That preparation may allow clinicians to spend more time discussing values, preferences, and shared decision‑making rather than sorting through fragmented online searches.
But he warns of a growing risk: overconfidence.
Patients may assume AI-generated summaries are equivalent to clinical judgment, even though the system lacks access to physical exams, tone, nuance, and the full medical record.
“Healthcare professionals will need new skills,” Liebovitz notes — including validating what patients bring in, correcting inaccuracies, and identifying when AI has missed critical context.
How Clinicians Should Talk About AI With Patients
Liebovitz advises healthcare professionals to acknowledge the usefulness of AI tools while setting firm boundaries.
A suggested framing:
“It’s helpful for organizing your questions and understanding basic concepts, but it cannot replace what I assess in person.”
He cautions clinicians not to dismiss AI outright, as doing so may discourage open communication. Instead, he recommends asking patients what they found online and using those moments as teaching opportunities.
Safe Use: Three Principles for Patients
Liebovitz outlines three guidelines for responsible use of AI health tools:
1. Use AI for preparation, not diagnosis
ChatGPT Health can help explain terminology or identify topics to discuss — but should not be used to determine what condition a patient has or what treatment they should pursue.
2. Always verify with a clinician
Any AI-generated suggestion that could influence medical decisions must be confirmed by a healthcare professional.
3. Understand privacy limitations
ChatGPT is not covered by HIPAA. Sensitive information — including reproductive health, mental health, substance use, or legal matters — may not carry the same protections as conversations with a physician.
The Biggest Misunderstanding: AI Is Not a Second Opinion
Liebovitz stresses that large language models generate plausible text, not verified medical conclusions. They may hallucinate, omit key details, or present incorrect information with unwarranted confidence.
“Confidence from an AI tool does not mean correct,” he says.
Looking Ahead: AI as a Permanent Layer in Healthcare
Over the next five years, Liebovitz expects AI to become embedded in routine care — from documentation to surfacing relevant history to flagging potential issues. Patients, meanwhile, may increasingly rely on AI as a “persistent health assistant” that helps track trends, prepare for appointments, and navigate insurance or system barriers.
But the core of medicine — trust, judgment, and shared decision‑making — will remain human.
Clinicians who embrace AI, he predicts, will have more efficient and meaningful conversations with patients. Those who resist may find patients turning elsewhere or withholding what AI has told them.
Why ChatGPT Health Matters Now
OpenAI reports that 40 million people per day already ask ChatGPT health-related questions. The new feature formalizes that behavior, adding encrypted spaces, medical record connections, and guardrails designed for health use.
The timing aligns with federal requirements that health systems provide patients access to their records through standardized APIs — a shift that AI tools can now help consumers navigate.
Strengths and Limitations
Where ChatGPT Health Excels
• Synthesizes information across sources
• Personalizes explanations using patient data
• Summarizes lab trends and potential medication interactions
• Helps patients prepare for appointments
Where It Falls Short
• Can hallucinate or provide inaccurate citations
• Lacks physical exam findings and clinical nuance
• May miss critical details known only to long‑term providers
• Has no accountability when wrong
How Patients Will Likely Use It
Liebovitz identifies five high‑value use cases:
• Pre‑visit preparation
• Post‑visit clarification
• Health check‑ins and reminders
• Insurance navigation
• General medical questions (excluding active symptoms, which require clinical care)
Privacy Risks: A Major Concern
Many users mistakenly believe AI conversations are protected like doctor–patient interactions. They are not.
Because ChatGPT is not a “covered entity,” health information shared with it could theoretically be accessed through legal processes or used in ways patients did not anticipate.
Liebovitz urges extra caution in areas such as:
• Mental health
• Reproductive health
• Substance use
• HIV status
• Genetic information
• Anything involving legal proceedings
Source: Medical News Today

