
5 ways your doctor may be using AI chatbots — and why it matters

By Michal Ruprecht, CNN

(CNN) — Millions of Americans are turning to AI chatbots for health answers. Doctors are, too.

But the ways doctors are incorporating AI chatbots into their practice are surprising.

Specialized medical AI chatbots have quickly become a go-to source for many doctors and trainees. The CEO of one of these medical chatbot companies recently claimed that more than 100 million Americans were treated by a doctor who used their platform last year.

Popular chatbots like OpenAI’s ChatGPT don’t meet the bar for doctors, who say these platforms aren’t always accurate or up to date with the latest guidance. OpenAI’s usage policies state that users are not allowed to use its services for “tailored advice” without consulting a licensed health professional.

“ChatGPT is like your crazy uncle,” said Dr. Ida Sim, a professor at the University of California, San Francisco, who studies how to use data and technology to improve health care.

The edge, Sim says, is that medical chatbots are less prone to sycophancy and more likely to ground answers in peer-reviewed research and clinical guidelines. That’s why she says the uptake has been “tremendous.”

The most common use case

Millions of research papers are published every year — and keeping up with them all is impossible.

“You’d need like 18 hours a day to stay up to date,” said Dr. Jared Dashevsky, a resident physician at the Icahn School of Medicine at Mount Sinai.

But doctors are expected to stay current on new research and guidelines to maintain their licenses. Many say they now use medical chatbots as a reference tool to help them stay updated.

Rather than pulling information from the entire internet, specialized medical chatbots actively search medical literature, says Dr. Jonathan H. Chen, an associate professor at Stanford Medicine who leads his health system’s efforts to integrate AI into medical education.

That workflow provides doctors with more accurate answers that summarize and link to important papers and guidelines. Dashevsky, who writes about AI, says these features are especially helpful for trainees working long hours.

Uploading patient records to AI bots

Some health systems have adopted AI chatbots to improve patient care, promising doctors safety and privacy protections.

But many doctors also use unauthorized chatbots, known as "shadow AI," according to doctors CNN spoke with. Some of these shadow AI tools even advertise HIPAA compliance features.

HIPAA is a federal law that requires certain organizations that maintain identifiable health information — such as hospitals and insurers — to protect it from being disclosed without patient consent.

Language used by shadow AI tools has led some doctors to believe it's safe to upload protected health information to chatbots in exchange for more tailored answers. But Iliana Peters, a health care lawyer at the law firm Polsinelli who previously led HIPAA enforcement for the US Department of Health and Human Services, says that assumption is inaccurate.

“‘HIPAA compliance’ is not an accurate term to use by any company,” Peters said, explaining that the phrase should be used only by government regulators.

Despite that, Dr. Carolyn Kaufman — a resident physician at Stanford Medicine — and other doctors say that patient information is making its way into unauthorized chatbots, potentially opening the door to new ways of commodifying patient data.

“Data is money,” Kaufman said, noting that she has never uploaded HIPAA-protected information onto an unapproved chatbot. “If we’re just freely uploading those data into certain websites, then that’s obviously a risk for the individual patient and for the institution, as well.”

Drafting AI-generated notes

AI chatbots have also stepped in to help doctors draft summaries of patient visits and long hospital stays. These notes are viewable on online patient portals and help doctors track a patient’s course and communicate plans across the care team.

“It’s probably safer to have artificial intelligence review a hospital course and know everything happened, versus you as a human — with limited time, jumping between note to note — trying to put the pieces together,” Dashevsky said, arguing that although concerns over AI accuracy are valid, human-based summaries may also miss key details.

Writing letters to insurance companies

Administrative work can take up nearly nine hours a week for the average doctor, and the time doctors spend on insurance-related tasks costs an estimated $26.7 billion each year.

A feature that Dashevsky says has been a “game-changer” is chatbot-authored letters to insurance companies for prior authorizations and other correspondence, allowing him to field patient requests more quickly.

“I would have to figure out who this patient is, write the letter myself and review it. It took so much time,” he said. “Now, AI will produce for you a really good letter.”

Creating a list of possible diagnoses

When patients come to doctors with concerns, physicians have to figure out how to help them. Part of that process is considering a range of possible diagnoses. Many medical students and trainees use AI chatbots to help build that list, and some doctors beyond training use the feature, too.

“From a med student perspective … you’re seeing a lot of things for the first time,” said Evan Patel, a fourth-year medical student at Rush University Medical College. “AI chatbots sort of help orient me to what possibilities it could be.”

Kaufman says the bots provide the most accurate list when she includes every relevant data point about the patient, like lab results and imaging findings.

What patients need to know

All eight doctors and trainees CNN spoke with say they regularly use medical AI chatbots. And most have a positive outlook, viewing these tools as a way to offload certain cognitive and administrative tasks. But patient privacy concerns are valid, the doctors say.

As with any AI tool, Kaufman says, errors happen and information can be inaccurate. When she consults peers for second opinions, she says, they “almost never agree” with the AI chatbot’s answer.

“People treat AI like it’s magic,” Chen said. “It’s not magic. It can’t just do anything you want.”

He added: "You ask the same question 10 times, and it'll give you 10 different answers." That variability, Chen argues, underscores the technology's limitations.

Medicine operates on three layers, Sim says: workflows, knowledge and expertise. AI is transforming the first two. But that last layer — core to the care patients receive — is harder to replicate and may be what matters most.

“If we just apply guidelines, then replace us,” Sim said. “It’s where you take the knowledge and apply it to an evolving set of conditions in the context of your life. That’s what medicine is. It’s in the context of people’s lives. And these machines don’t do that.”

The-CNN-Wire
™ & © 2026 Cable News Network, Inc., a Warner Bros. Discovery Company. All rights reserved.
