Study reveals AI chatbots give misleading medical advice nearly half the time

New Delhi: Artificial Intelligence-driven chatbots are providing unreliable or problematic medical advice that is neither prescribed nor recommended by experts, suggests a new study published in the medical journal BMJ Open. The research highlights the growing health risks posed by artificial intelligence as it becomes embedded in day-to-day life, with people relying on it for guidance despite lacking the expertise to judge its answers.

Researchers from the United States, Canada, and the United Kingdom assessed five widely used platforms—ChatGPT, Gemini, Meta AI, Grok, and DeepSeek—by posing 10 questions in each of five health categories. The study, published this week, found that nearly half of the responses were problematic, with around 20 per cent classified as highly concerning.

Artificial intelligence gives misleading advice

During the research, scientists found that the chatbots performed relatively better on closed-ended prompts and on questions related to vaccines and cancer, and worse on open-ended prompts and in areas such as stem cells and nutrition.

The chatbots delivered their answers with uniform confidence across every prompt, yet no chatbot was able to produce a complete list in response to any prompt, the researchers said. There were only two refusals to answer a question, both from Meta AI.

The results of the study add to growing concern about people turning to AI platforms for answers and advice, as these systems are not licensed to give medical advice and lack the clinical judgment needed to diagnose.

The researchers noted that the results “highlight important behavioural limitations and the need to reevaluate how AI chatbots are deployed in public-facing health and medical communication.” They added that these tools can produce “authoritative-sounding but potentially flawed responses,” underscoring the need for more caution in their use.

OpenAI has said that more than 200 million people ask ChatGPT health and wellness-related questions every week. The platform announced health tools for both everyday users and clinicians in January, and Anthropic said the same month that its Claude product is launching a new health care offering, as the explosive growth of AI chatbots has made them a popular tool for seeking guidance during illness.

The authors of the BMJ Open study warned that deploying chatbots without proper public education and oversight poses a significant risk, as it could lead to the spread and amplification of misinformation.