Google has removed some of its AI-generated health summaries after an investigation found they were providing false and misleading medical information.
The issue centers on Google’s AI Overviews, which appear at the top of search results. They use generative AI to provide quick answers to users’ questions.
Google has repeatedly described them as helpful and reliable. However, evidence from a Guardian investigation suggests that, in some health cases, they may do more harm than good.
Liver Blood Test Searches
One of the most troubling examples involved searches about liver blood tests.
When users typed “what is the normal range for liver blood tests,” Google’s AI Overview returned a long list of numerical ranges with little supporting explanation.
It also failed to consider key factors such as age, sex, ethnicity, or nationality. Medical experts described the information as dangerous and alarming.
They warned that what the AI described as “normal” could differ widely from what doctors actually consider safe.
As a result, people with serious liver disease could wrongly believe their results were fine.
That false reassurance could lead them to skip follow-up appointments or delay urgent care.
False Reassurance
Liver blood tests are complex; they are not a single test but a group of measurements. Doctors interpret them together and within a broader clinical context.
Vanessa Hebditch, director of communications and policy at the British Liver Trust, explained why the AI summaries were risky.
She said people can have normal liver test results and still have serious liver disease. The AI Overviews did not warn users about this possibility.
Instead, the summaries listed test names in bold with numerical ranges. This made the information look authoritative and complete.
Many readers could easily assume the numbers applied to them. Hebditch warned that this false reassurance could be very harmful.
Google’s Action
After the Guardian shared its findings, Google removed AI Overviews for two specific search queries: “What is the normal range for liver blood tests?” and “What is the normal range for liver function tests?”
Google did not comment on the individual removals. A spokesperson said the company does not discuss specific search changes.
The spokesperson added that when AI Overviews miss important context, Google works to make broad improvements.
The company also said it takes action under its policies where appropriate.
Similar Searches
Despite the removals, concerns remain. The Guardian found that slight variations of the same query still produced AI Overviews.
Searches such as “LFT reference range” and “LFT test reference range” continued to show summaries.
Hebditch said this was deeply worrying. She explained that a liver function test, or LFT, involves several different blood tests.
Understanding the results requires medical expertise and cannot be reduced to a list of numbers.
She also criticized the AI summaries for failing to explain that serious liver disease can exist even when test results appear normal.
AI Health Information
Hebditch stressed that the problem goes beyond one search result. She said Google can disable AI Overviews for specific queries.
However, that approach does not address the wider issue of AI-generated health information across search.
Her concern is that misleading summaries can still appear when questions are phrased differently.
Patient Groups
Sue Farrington, chair of the Patient Information Forum, welcomed the removal of the liver test summaries. Still, she said it was only the first step.
She warned that there are still many examples of AI Overviews providing inaccurate health information.
Farrington stated that millions of adults already struggle to access trusted health advice. When search engines deliver incorrect summaries, the risk increases.
She said Google should clearly direct users to evidence-based information. It should also highlight trusted health organizations and appropriate care options.
Additional Conditions
The Guardian investigation also identified AI Overviews related to cancer and mental health. Experts described some of these summaries as “completely wrong” and “really dangerous.”
These AI Overviews are still appearing in search results. When asked why they had not been removed, Google said the summaries link to well-known and reputable sources.
The company also said they inform users when it is important to seek expert advice. A Google spokesperson added that internal teams of clinicians reviewed the examples.
In many cases, they found the information was “not inaccurate” and was supported by high-quality websites.
Google’s Explanation
Google said AI Overviews only appear when the company has high confidence in the quality of the response.
The company also said it constantly measures and reviews the quality of its summaries across many categories, including health.
Google currently holds about 91% of the global search engine market. Because of this reach, even small errors can affect millions of users.
Accountability Call
Victor Tangermann, a senior editor at the technology publication Futurism, said the investigation highlights serious gaps.
He said Google must do more to ensure its AI tools do not spread dangerous health misinformation.
Matt Southern, a senior writer at Search Engine Journal, also weighed in. He pointed out that AI Overviews appear above traditional search results.
When the topic is health, he said, errors carry far more weight.