More and more people are taking their health concerns to AI chatbots like ChatGPT. The reasons are not hard to find: long hospital wait times and high medical costs.
A recent survey shows that one in six American adults turns to chatbots for health advice at least once a month.
But a new Oxford-led study says this habit comes with risks. Many users struggle to get clear, helpful answers.
Often, they don’t even know what to ask. Worse still, they may receive advice that mixes correct and harmful information.
Study Shows No Improvement With AI Chatbots
Researchers at the Oxford Internet Institute ran a large experiment. They gave 1,300 participants in the U.K. several medical scenarios created by doctors.
The goal was to test how well people could make health decisions using both AI tools and their own judgment.
Participants used several top AI models: GPT-4o (ChatGPT), Cohere Command R+, and Meta’s Llama 3.
They were also allowed to search online or rely on their own understanding. Surprisingly, the study found no major advantage to using AI. People did not perform better with chatbots than without them.
Dr. Adam Mahdi, co-author of the study, explained the issue: “Participants often left out important details when talking to the chatbots,” he said. “This led to advice that was incomplete or unclear.”
Chatbots May Hide Serious Problems
The results raised more concerns. Many participants failed to spot serious conditions, and some even downplayed the risks after reading chatbot responses.
In several cases, AI-generated advice mixed accurate and inaccurate information. This made it hard for users to decide what to do next.
Worse, people often misunderstood the chatbot’s suggestions, and this confusion led to poor health choices. The study suggests chatbots may actually weaken decision-making, not strengthen it.
AI Is Expanding in Healthcare, But Is It Ready?
Despite these issues, tech companies continue to develop AI health tools:
- Apple is creating an AI coach for sleep, exercise, and diet.
- Amazon wants to use AI to analyze social determinants of health.
- Microsoft is working on AI that can sort messages from patients to doctors.
These projects aim to make care faster and more personal. Still, experts warn that AI is not ready for such high-stakes health tasks.
The American Medical Association advises doctors not to rely on chatbots like ChatGPT for making medical decisions.
Even OpenAI, the company behind ChatGPT, warns users not to trust the chatbot with a diagnosis.
Why Communication With Chatbots Fails
One key problem is how people interact with AI. Many do not provide enough context or detail, so they get vague or misleading answers in return.
Unlike doctors, chatbots cannot ask follow-up questions. Nor can they read body language, tone, or urgency, cues that are essential in health care.
Dr. Mahdi noted another issue. “Chatbots are tested in labs, not real life,” he said. “But real people make mistakes, skip details, or panic. The current testing process does not reflect this complexity.”