Science

Patients Turning to AI for Medical Results—Doctors Warn

Using AI – For many patients, chatbots are becoming a quick translator for lab reports and imaging. But clinicians warn the same tools can invent details, amplify misconceptions, and leave people chasing wrong conclusions—often before a doctor ever sees the case.

When Judith Miller picked up the results of a medical imaging study last year, she did what many patients now do: she typed them into an AI chatbot and asked it to explain what they might mean.

The 77-year-old Wisconsin resident said Claude’s answers helped her walk into her follow-up appointment feeling ready for a “productive conversation” with her doctor. “Claude’s responses enabled me to better understand my health and engage more fully in shared decision-making,” Miller said.

Across the country, that moment is becoming routine. Two recent polls found that a third of American adults have turned to large language models for health information—ranging from interpreting lab results to researching treatment options and asking about prescription drugs. Robert Wachter, a physician at the University of California, San Francisco, says the pace is quickening: “The use of tools like these has doubled in the past year,” he said, adding, “I suspect they’ll double again next year.”

Yet even as patients embrace AI as a guide, many doctors and patient advocates worry the guidance can drift into dangerous territory. Anthropic, which makes Claude, agrees there are boundaries. “Claude is not designed or marketed for making clinical diagnoses,” a spokesperson said. Its purpose, the company added, is “helping people prepare for conversations with their doctors, not replacing them.”

The tension isn’t just about whether AI can interpret text. It’s about what people do when they believe it. Experts say today’s chatbots can be misleading because they are prone to mistakes—sometimes presenting falsehoods as facts. They can also respond in ways that flatter a user’s existing beliefs, reinforcing ideas that may already be wrong.

Cait DesRoches, executive director of OpenNotes, a nonprofit that promotes patients’ access to medical records, says the bigger concern is how little is known about outcomes when patients treat LLMs as health authorities. “There aren’t a lot of guardrails around breaking them, pushing them to tell you actual misinformation,” she said. And she added, “I don’t think we have any idea how well it works for average patients.”

What that risk can look like is already documented. In December, a 75-year-old man in Seattle reportedly died of a treatable type of leukemia after refusing treatment, based on AI-generated evidence that incorrectly suggested he had a rare complication.

Some preliminary research suggests the problem may be systemic. In a Nature Medicine study published in February, researchers asked participants to diagnose a hypothetical condition with help from multiple LLMs. Participants reached the right conclusion only about a third of the time.

Even with those warnings, many clinicians don’t argue for an outright ban. DesRoches says people shouldn’t ignore the technology—but should use it with caution. “I don’t think people should avoid using them,” she said. “But I do think people should use them with their eyes open.”

Adam Rodman, a general internist at Beth Israel Deaconess Medical Center, takes a stronger stance—while still insisting on guardrails. “I would argue that LLMs, if used appropriately—that’s a big caveat—are the best tool for patient empowerment ever invented.”

Part of the push now is to keep patients informed without letting AI dictate medical decisions. Rodman and other researchers point to strategies designed to reduce error and improve the way a chatbot gathers information. One approach is to tell the model to take on the persona of a doctor, which Rodman says can “prompt the model to collect data in a physician-like manner.”

Other tactics include asking the model to reassess its own reasoning or to seek a “second opinion” from a different model. Rodman also stresses privacy: remove personal information such as your name and Social Security number before entering anything into a chatbot.
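In practice, those tactics amount to careful prompt construction. The sketch below is a rough illustration of the idea, not a vetted tool: send_to_chatbot is a hypothetical stand-in for whichever chatbot a patient actually uses, and the regex scrubbing catches only the most obvious identifiers—anything subtler still has to be removed by hand.

    import re

    # Hypothetical stand-in for whatever chatbot is in use (Claude, ChatGPT, etc.);
    # the point here is the shape of the prompt, not any particular API.
    def send_to_chatbot(prompt: str) -> str:
        print(prompt)  # in practice, this would call the chatbot service
        return "(model reply)"

    def scrub_pii(text: str) -> str:
        """Strip obvious identifiers before pasting results into a chatbot."""
        text = re.sub(r"\b\d{3}-\d{2}-\d{4}\b", "[SSN removed]", text)   # SSN pattern
        text = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[date removed]", text)  # dates of birth
        # Names and record numbers vary too much for a regex; delete those by hand.
        return text

    lab_report = "Patient: Jane Doe, SSN 123-45-6789, DOB 01/02/1948. ALT 72 U/L (ref 7-56)."

    # A persona prompt along the lines Rodman describes, with a self-check step
    # asking the model to reassess its own reasoning before answering.
    prompt = (
        "Act as a careful physician reviewing a lab report. "
        "Ask me clarifying questions before interpreting anything, "
        "flag any value you are unsure about, and then double-check "
        "your own reasoning for errors.\n\n" + scrub_pii(lab_report)
    )

    send_to_chatbot(prompt)

Running the same scrubbed prompt past a second, different model—the “second opinion” Rodman mentions—is then just a matter of pointing the final call at another service.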

For Wachter, the real-world tradeoff can show up in appointment rooms. He says AI can leave patients with confident ideas that take time to correct. “I’ve got 15 minutes for this appointment,” he said. “And I’m going to have to spend the first 10 minutes talking the patient down from what GPT told them to do.”

But the concern isn’t only clinical—it’s access. As more patients use chatbots to fill gaps, some experts say AI may be stepping in where care is hard to reach. “The access issue is at crisis level,” said Laura Adams, a senior adviser to the National Academy of Medicine on AI matters. Even so, she argues the comparison shouldn’t be to perfect advice. “It’s better than nothing,” she said.

Adams also puts the change in plain terms: “With AI and medical advice, the horse is way out of the barn.” For that reason, she and advocates say the focus should shift to AI literacy—teaching people how to use these tools responsibly rather than leaving them in the dark.

Dave deBronkart, a health care blogger and activist, frames it as an education problem, not an abandonment problem. “The remedy is not to keep people ignorant,” he said. “It’s to teach them how to do it better” through educating children and adults.

Newer LLMs may also improve in medical uses, Wachter said, including the possibility that some models could eventually undergo board certification—similar to the pathway physicians take.

For now, patients like Miller are approaching AI in the way many experts say they should: as a starting point, not an authority. “It’s just following up words that were probable,” she said. “I’m not looking at it as a source of absolute truth.”

Tags: AI, large language models, medical results, patient empowerment, Claude, healthcare access, OpenNotes, misinformation, Nature Medicine study

3 Comments

  1. I mean if it helps you understand your stuff before the appointment, that’s not automatically bad. But yeah I could totally see it making stuff up like “common” findings or whatever and then you’re freaking out for no reason.

  2. Wait so the AI invented details and now people are chasing wrong conclusions before a doctor even sees it… sounds like it’s the patients’ fault for believing it. Like just ask the doctor, not a robot. I’m not saying doctors are perfect either, but come on.

  3. I don’t get why they’re acting like everyone uses it to diagnose themselves. I tried one of these when I had imaging done and it just basically translated the scary words into normal English. The part about it “amplifying misconceptions” though… like what if it’s right but then someone still doesn’t trust the doctor? Also doubling every year?? That seems fast, are they saying it’s gonna be like replacing radiologists or something?
