AI in Healthcare: The Latest Updates from OpenAI, Google & Anthropic

Your source of news for AI in healthcare: monthly updates from the Goodie team about happenings in the AI and medical spaces. Read the latest.
Daria Erzakova
February 9, 2026

Last Updated: January 2026

We’ve just started 2026, but the first half of January is already shaping up to be a defining moment for AI, this time in the healthcare space. Within days, three tech giants made moves that reveal where the industry is heading, and, as with almost everything in AI, it's all happening fast:

  • First, OpenAI launched ChatGPT Health, a dedicated space where users can connect their medical records and wellness apps for personalized health conversations.
  • Just a few days later, Anthropic countered with Claude for Healthcare, positioning itself as a more comprehensive solution for providers, payers, and patients.
  • Meanwhile, Google went against the grain and actually backtracked, removing AI Overviews from certain medical queries after concerns about misleading health information.

Here's what makes this significant: over 40 million users worldwide are asking ChatGPT health-related questions every day, and that's ChatGPT alone. Most of these conversations happen outside normal clinic hours, when people can't easily reach their doctors.

AI has quietly become the first stop for health information, and these recent announcements show the industry racing to meet (and shape) that demand.

Whether you're in healthcare marketing, working with healthcare clients, or just trying to understand where this technology is headed, here's what you need to know.

Note: This article will be updated throughout 2026 with all major news regarding healthcare and AI.

AI in Healthcare: January 2026

Graphic showing the happenings of AI in healthcare in January 2026.

OpenAI Launches ChatGPT Health

On January 7, 2026, OpenAI unveiled ChatGPT Health, a dedicated, encrypted space within ChatGPT where users can connect their medical records and wellness apps for personalized health guidance.

What Users Can Connect:

  • Electronic health records (via b.well partnership covering ~2.2M U.S. providers)
  • Apple Health
  • Popular wellness apps like MyFitnessPal, Weight Watchers, Peloton, AllTrails, and Function

Key Features:

  • This is a separate, encrypted space isolated from regular ChatGPT chats
  • It has its own memory system; health info stays in Health and doesn't flow to other conversations
  • The content of these chats is not used to train AI models
  • Enhanced security (specifically for medical data)

How It Was Built: OpenAI worked with 260+ physicians across 60 countries who reviewed model outputs 600,000+ times. They created HealthBench, an evaluation framework that measures safety, clarity, and whether the AI appropriately encourages follow-up care with real doctors.
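HealthBench's internal rubric isn't detailed here, but the idea of grading model outputs against physician-defined criteria can be sketched with a toy evaluator. Everything below (the criterion names, the keyword checks, the scoring) is hypothetical and purely illustrative; a real framework would use physician-written rubrics and far richer grading:

```python
# Hypothetical sketch of rubric-style evaluation for health responses.
# Criterion names and keyword checks are illustrative, NOT HealthBench's
# actual rubric.

def evaluate_response(response: str) -> dict:
    text = response.lower()
    rubric = {
        # Does the model point the user back to a clinician?
        "encourages_follow_up": any(
            p in text for p in ("see a doctor", "consult", "healthcare provider")
        ),
        # Does it hedge rather than assert a diagnosis?
        "acknowledges_uncertainty": any(
            p in text for p in ("may", "might", "could", "not a diagnosis")
        ),
        # Does it avoid prescribing or changing treatment?
        "avoids_prescribing": not any(
            p in text for p in ("you should take", "stop taking")
        ),
    }
    # Fraction of criteria passed (True counts as 1).
    rubric["score"] = sum(rubric.values()) / 3
    return rubric

result = evaluate_response(
    "Your ALT is slightly elevated, which could have many causes. "
    "This is not a diagnosis; please consult your healthcare provider."
)
```

The point is the shape of the evaluation: each response is graded against explicit safety criteria, including whether it appropriately pushes the user toward real follow-up care.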

ChatGPT Health Use Cases:

  • Understanding lab results in plain language
  • Tracking health trends over time
  • Preparing questions before doctor appointments
  • Comparing insurance options
  • Getting personalized meal and workout suggestions
  • Flagging concerning patterns that need urgent attention

OpenAI's CEO of Applications, Fidji Simo, shared how ChatGPT caught a potentially dangerous antibiotic prescription that could have reactivated a serious infection from her past, something the resident physician missed because health records aren't organized to surface that context easily.

The Big Disclaimer: OpenAI has stated that ChatGPT Health isn't for diagnosis or treatment. It’s designed to support medical care (not replace it). Users are urged to think of it as a health assistant that helps you navigate the system.

Availability: Rolling out first to limited early users, then expanding to all ChatGPT Free, Go, Plus, and Pro users (outside the EEA, Switzerland, and UK) in the coming weeks.

Anthropic Responds With Claude for Healthcare

Five days after OpenAI's announcement, Anthropic launched Claude for Healthcare, and unlike OpenAI, they positioned it as more than just a patient-facing chatbot.

What Makes It Different: While ChatGPT Health focuses heavily on consumers, Claude targets the entire healthcare ecosystem: providers, payers, and patients. We’re seeing this as a B2B2C play.

The Enterprise Features: Claude connects to authoritative healthcare databases and systems.

The Big Use Case: Prior Authorization Automation. One standout feature automates prior authorization, the process where doctors submit documentation to insurance companies to get treatments approved.

As Anthropic CPO Mike Krieger pointed out, "Clinicians often report spending more time on documentation and paperwork than actually seeing patients." This is exactly the kind of administrative burden that AI can genuinely help with.
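To make the prior-authorization idea concrete, here is a minimal sketch of the drafting step such a system might automate. The field names, codes, and `draft_justification` helper are all hypothetical; in a real workflow an AI model would draft the clinical rationale from chart notes, and a clinician would review it before submission:

```python
# Illustrative sketch of a prior-authorization drafting step.
# The data model, codes, and helper are hypothetical, not Anthropic's API.
from dataclasses import dataclass

@dataclass
class PriorAuthRequest:
    patient_id: str
    procedure_code: str   # e.g. a CPT code
    diagnosis_code: str   # e.g. an ICD-10 code
    justification: str    # clinical rationale, drafted for clinician review

def draft_justification(diagnosis_code: str, procedure_code: str, notes: str) -> str:
    # In practice this is where an LLM would summarize the chart notes;
    # here we simply stitch the inputs together as a stand-in.
    return (f"Requesting {procedure_code} for diagnosis {diagnosis_code}. "
            f"Supporting notes: {notes}")

req = PriorAuthRequest(
    patient_id="PT-001",
    procedure_code="72148",   # MRI lumbar spine (illustrative)
    diagnosis_code="M54.5",   # low back pain (illustrative)
    justification=draft_justification(
        "M54.5", "72148", "6 weeks of conservative therapy failed"
    ),
)
```

The design choice worth noting is that the AI only drafts; a human clinician stays in the loop to approve before anything is sent to a payer.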

For Different Users:

  • Providers: Reduced documentation, synthesized medical research, automated paperwork
  • Payers: Streamlined claims processing and authorization reviews
  • Patients: Similar capabilities to ChatGPT Health (understanding health info, appointment prep, system navigation)

Like OpenAI, Anthropic emphasizes that health data won't be used for model training and that Claude is designed to augment, not replace, healthcare professionals.

Google Pulls Back: AI Overviews Removed from Medical Queries

While OpenAI and Anthropic were pushing forward, Google was backpedaling. On January 11, 2026, following a Guardian investigation, Google removed AI Overviews from certain health searches.

The problem? The Guardian found that Google's AI Overviews served health information that could mislead users. Take an example: a user searches for "normal range for liver blood tests" and is given reference numbers that don't account for age, sex, or nationality. That could easily lead someone to think their normal results were concerning (or vice versa).

What Happened:

  • Google removed AI Overviews from specific queries like "normal range for liver function tests"
  • Variations of these queries could still trigger AI summaries, showing an inconsistent approach
  • Hours after the report, most health queries tested showed no AI Overviews

Google's Response: A Google spokesperson said they don't comment on specific removals but emphasized they "make broad improvements." An internal clinical team reviewed the flagged queries and found that "in many instances, the information was not inaccurate and was also supported by high-quality websites."

The Bigger Issue: As Vanessa Hebditch from the British Liver Trust noted, removing AI Overviews from one search is "excellent news," but the bigger concern is that "it's not tackling the bigger issue of AI Overviews for health."

The incident highlights a critical tension: even technically accurate information can be dangerous in healthcare without proper context. And even the most well-resourced companies with clinical teams struggle to get this right at scale.

How is AI Used Today in Healthcare?

Flywheel showing 5 ways in which AI is used in healthcare in 2026.

Now that we’ve covered the most recent developments in the healthcare space when it comes to AI, let’s zoom out a bit. Over the last year, AI has moved from experimental to mainstream across healthcare. Here's where you'll find it in action:

For Providers:

  • Clinical Decision Support: Analyzing patient history, symptoms, and labs to suggest diagnoses or flag drug interactions
  • Medical Imaging Analysis: Identifying patterns in X-rays, MRIs, CT scans, and pathology slides
  • AI Scribes: Automatically generating clinical notes from doctor-patient conversations (reducing the aforementioned documentation burden)
  • Predictive Analytics: Forecasting which patients are at risk for readmission or complications
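As a rough illustration of what predictive analytics looks like at its simplest, here is a toy readmission-risk flag. The features, weights, and threshold are invented for illustration; production systems use validated models trained on real clinical data:

```python
# Toy readmission-risk flag. Features and weights are made up for
# illustration and are NOT a validated clinical model.

def readmission_risk(prior_admissions: int, num_medications: int,
                     has_chronic_condition: bool) -> float:
    # Toy linear score capped at 1.0; real systems use trained models.
    score = 0.15 * prior_admissions + 0.05 * num_medications
    score += 0.3 if has_chronic_condition else 0.0
    return min(score, 1.0)

def flag_for_follow_up(risk: float, threshold: float = 0.5) -> bool:
    # High-risk patients get routed to a care coordinator for review.
    return risk >= threshold

risk = readmission_risk(prior_admissions=2, num_medications=6,
                        has_chronic_condition=True)
needs_follow_up = flag_for_follow_up(risk)
```

Even in this toy form, the pattern matches the bullet above: score a patient from structured features, then route high-risk cases to human follow-up rather than acting automatically.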

For Payers:

  • Claims processing automation
  • Fraud detection
  • Prior authorization workflows
  • Coverage decision support

For Patients:

  • 24/7 health information access (subject to change given Google’s pullback from providing medical information without proper context)
  • Symptom assessment and triage guidance
  • Medication reminders and management
  • Wellness tracking via wearables and apps

For Pharma & Biotech:

  • AI-driven drug discovery (can reduce early discovery timelines by 70%+)
  • Clinical trial optimization and patient matching
  • Predicting drug interactions and side effects
  • Protein structure prediction (like AlphaFold)

For Researchers:

  • Rapid literature review and synthesis
  • Analysis of genomic, proteomic, and metabolomic data
  • Identifying patterns across large patient populations
  • Understanding disease mechanisms at molecular levels

Usage Patterns: Most health-related AI conversations happen outside of normal clinic hours, when people can't easily reach their doctors. This suggests AI is filling real access gaps in the healthcare system.

Benefits of AI in Healthcare

Graphic showing 4 benefits of using AI in healthcare.

For Patients

  • 24/7 access to health information when clinics are closed
  • Personalized guidance based on their actual health data, genetics, and lifestyle
  • Better health literacy, helping patients understand medical jargon and test results in plain language
  • System navigation help for things like comparing insurance, understanding bills, and coordinating care
  • Continuous monitoring via wearables with AI interpretation

For Healthcare Providers

  • Reduced Administrative Burden: Less time documenting, more time with patients
  • Enhanced Diagnostic Support: AI flags potential issues and suggests differential diagnoses
  • Access to Synthesized Research: Staying current with medical literature without drowning in papers
  • Improved Operational Efficiency: Better scheduling, resource allocation, and patient flow
  • Clinical Decision Support: Drug interaction alerts, treatment guideline reminders

For Healthcare Systems

  • Cost reduction through automation and efficiency
  • Better quality metrics from consistent application of guidelines
  • Predictive analytics, such as anticipating needs, preventing readmissions
  • Population health management at scale
  • Ability to serve more patients with existing resources

For Research & Innovation

  • Accelerated drug discovery (70%+ faster timelines in early stages)
  • Precision medicine insights from genomic and proteomic data analysis
  • Optimized clinical trials: Better patient matching, faster completion
  • Breakthrough research: AI-powered protein folding, disease mechanism understanding
  • Faster translation from research to clinical practice

Challenges & Concerns With AI in Healthcare

Graphic depicting the risks of AI in healthcare.

Accuracy & Safety Issues

AI models predict likely responses; they don't reason from verified medical knowledge. This creates real problems:

  • Hallucinations: Confident-sounding responses riddled with incorrect information, which can be dangerous for users who don't fact-check
  • Lack of Context: When not provided with proper context, AI can provide generic advice that could be dangerous for specific individuals
  • High-Stakes Errors: Wrong guidance can delay treatment or cause harm
  • Detection Difficulty: Users without medical training can't easily spot AI mistakes

The Google incident is a clear example of how even technically accurate information can turn out to be misleading without proper context.

Privacy & Security

  • Not HIPAA-Covered: Consumer AI health tools lack legal protections that apply to doctors
  • Data Breach Risks: AI companies are prime hacking targets
  • Trust Questions: Can you trust companies to maintain privacy policies long-term?
  • Processing Requirements: Your data must flow through AI systems to generate responses

While ChatGPT Health and Claude offer enhanced encryption and isolation, at the end of the day, users are still trusting commercial companies with complete medical histories.

Regulatory Gaps

  • No Comprehensive Framework: Consumer AI health tools aren't adequately regulated in the U.S.
  • Liability Uncertainty: Who's responsible if AI advice causes harm?
  • Global Inconsistency: Different standards across markets (EU AI Act vs. U.S. approach)
  • Licensure Questions: Does AI health guidance constitute "practicing medicine"?

Ethical Concerns

  • Healthcare Equity: AI tools require smartphones, internet, and health literacy, which could widen disparities
  • Bias in Models: Training data gaps mean less accuracy for underrepresented populations
  • Over-Reliance Risks: Patients delaying professional care, clinicians trusting AI too much
  • Loss of Human Connection: At the end of the day, medicine isn't just data; empathy and relationships matter

Workforce Impact

  • Clinician Skepticism: Past failures with health IT systems create warranted wariness
  • Training Burden: Learning to use AI effectively takes time
  • Role Uncertainty: Shifting responsibilities can feel like devaluing expertise
  • Burnout Complexity: AI could help or hurt, depending on implementation quality

The Future of AI in Healthcare

Near-Term (2026-2027):

  • ChatGPT Health and Claude for Healthcare expand from a limited rollout to broad availability
  • Deeper EHR integration as AI continues to be embedded directly into clinical workflows
  • More healthcare organizations adopt AI as competitive pressure continues to increase
  • Regulatory frameworks start taking shape (FDA guidance, legislative action)
  • Diagnostic AI becomes standard practice in certain specialties

Emerging Technologies:

  • AI Agents: Autonomous systems that schedule appointments, order tests, and coordinate care
  • Digital Twins: Virtual patient models to test treatments before administering them
  • Advanced Drug Discovery: AI designing novel molecular structures for precision therapeutics
  • Earlier Disease Detection: Pattern recognition to catch cancer, heart disease, and diabetes years earlier
  • Synthetic Biology + AI: More efficient pharmaceutical production, reduced drug costs

Regulatory Evolution:

  • FDA frameworks for continuously learning AI systems
  • EU AI Act implementation setting global standards
  • HIPAA modernization for the AI era
  • Movement toward international harmonization of AI healthcare standards

We won’t lie: the timeline is aggressive. The technology is improving fast. But whether AI genuinely improves healthcare or creates new problems depends on choices being made right now by tech companies, healthcare organizations, regulators, and users.

How Goodie Can Help Navigate the AI Healthcare Landscape

Graphic showing 4 ways in which Goodie can help healthcare brands in the AI space.

As AI becomes the primary source for health information, healthcare organizations face a new challenge: ensuring accurate representation when millions ask AI about medical topics.

This is where Answer Engine Optimization matters. When someone asks ChatGPT Health about a condition your organization treats, are you cited as an authority? When Claude discusses treatment options, does it reference your research?

Goodie's AEO platform helps healthcare organizations monitor visibility across AI search engines, track brand mentions and sentiment, identify optimization opportunities, and measure how you compare to competitors in the AI ecosystem.

For healthcare marketers, Goodie provides the insights needed to develop effective AI visibility strategies, while our AEO Content Writer helps create content optimized for both traditional search and AI platforms.

In an era where 230M+ people turn to AI for health guidance every week, maintaining visibility isn't optional. Learn more about AEO for healthcare.

AI in Healthcare: Frequently Asked Questions

What is AI in healthcare?

AI in healthcare uses machine learning, natural language processing, and other technologies to improve medical care, clinical workflows, and patient outcomes. It includes everything from diagnostic tools that analyze medical images to chatbots that answer health questions to algorithms that predict disease risk.

At its core, AI identifies patterns in large datasets (think medical images, health records, genomic sequences, and patient symptoms) to support diagnosis, treatment decisions, research, and patient care.

That being said, recent launches like ChatGPT Health and Claude for Healthcare represent a shift toward more conversational AI that integrates multiple data sources to provide personalized guidance.

What is an example of an AI system in healthcare?

Examples span the entire healthcare ecosystem:

  • Consumer-Facing: ChatGPT Health and Claude for Healthcare (personalized health guidance), symptom checkers and virtual health assistants, wearable device analytics
  • Clinical: Medical imaging analysis, clinical decision support, ambient AI scribes, predictive analytics (identifying high-risk patients)
  • Administrative: Prior authorization automation, claims processing, patient scheduling optimization
  • Research: Drug discovery platforms, protein structure prediction (like AlphaFold), clinical trial patient matching, literature synthesis tools

Is AI in healthcare safe?

AI in healthcare is sort of a gray area; whether it’s “safe” or not largely depends on the application. Clinical-grade AI tools with FDA clearance and physician oversight are generally safe for their intended purposes. On the other hand, consumer chatbots like ChatGPT Health are safer for general information but risky for high-stakes medical decisions.

Key safety concerns:

  • AI can "hallucinate" confident-sounding but incorrect information
  • Generic advice may be dangerous for specific individuals
  • Users without medical training struggle to spot AI errors
  • Consumer AI tools aren't HIPAA-covered

Companies like OpenAI and Anthropic have implemented safety measures: physician collaboration, enhanced privacy, explicit warnings against using AI for diagnosis. But these systems still work by predicting likely responses, not reasoning from verified medical knowledge.

Can I trust AI with my medical information?

Trust involves technical security, legal protections, and long-term data governance. ChatGPT Health and Claude for Healthcare implement strong encryption and data isolation, which is good. But consumer AI health tools aren't covered by HIPAA, so your data protection relies on company privacy policies that can change.

Dr. Danielle Bitterman from Mass General Brigham suggests: "The most conservative approach is to assume that any information you upload into these tools...will no longer be private."

How accurate are AI health chatbots like ChatGPT Health?

Accuracy varies based on the question type:

More reliable for:

  • Well-established medical facts
  • Common conditions and treatments
  • General health guidance
  • Medical terminology explanations

Less reliable for:

  • Rare conditions with limited research
  • Highly personalized medical decisions
  • Cutting-edge treatments
  • Context-dependent advice

OpenAI worked with 260+ physicians to improve accuracy and created HealthBench to evaluate responses. But AI still hallucinates occasionally, and users can't easily detect errors.

Important: Even when factually correct, AI advice can be contextually inappropriate for your specific situation. Always verify with healthcare providers for anything important.

How do I know if health information from AI is accurate?

Quick evaluation checklist:

✓ Check if AI acknowledges uncertainty and limitations
✓ Verify through multiple reputable sources (CDC, Mayo Clinic, specialty organizations)
✓ Be skeptical of very specific personal recommendations
✓ Look for appropriate encouragement to consult healthcare providers
✓ Try to verify specific statistics or study citations
✓ Use common sense (if it seems wrong, it might be)
✓ Compare responses across different AI platforms

Remember: AI is best for starting points and learning, not for authoritative medical advice. When in doubt, ask a real doctor.

Conclusion

AI in healthcare has real promise: reducing administrative burden, democratizing health information access, accelerating research, and supporting better clinical decisions. But the challenges (accuracy, privacy, regulation, equity) remain significant and unresolved.

What happens next depends on choices being made now by tech companies, healthcare organizations, regulators, and users. Will AI genuinely improve healthcare or exacerbate existing problems? Will it support clinicians or undermine them? Will it empower patients or mislead them?

The answers will emerge over the coming months and years. For healthcare organizations, staying informed isn't optional. Understanding how AI is reshaping everything from information access to care delivery is essential for remaining relevant in an AI-mediated healthcare landscape.

---

Some icons and visual assets used in this post, as well as in previously published and future blog posts, are licensed under the MIT License (https://opensource.org/licenses/MIT), Apache License 2.0 (https://www.apache.org/licenses/LICENSE-2.0), and Creative Commons Attribution 4.0 (https://creativecommons.org/licenses/by/4.0/).
