Study Warns Using AI for Medical Advice Is ‘Dangerous’ as Users Get Inaccurate Health Guidance
Post by: Anis Farhan
Using artificial intelligence (AI) systems for medical advice — including diagnosing symptoms or suggesting treatment — may pose significant risks to patients’ health and safety, according to a major study released this week. The research, conducted by a team from the University of Oxford and published through partner institutions, found that AI chatbots often provide inaccurate, inconsistent or contradictory guidance that could mislead users seeking help with medical questions.
The findings come amid surging use of AI-powered applications and chatbots by millions of people worldwide who turn to these systems for quick health answers. While the underlying technology continues to improve, experts warn that current AI models — including widely used large language models — are not yet reliable substitutes for professional medical advice and may even misinform users in ways that jeopardise their health.
Artificial intelligence tools such as large language models and specialised chatbots are increasingly accessible through smartphones, websites and dedicated apps. Many are marketed as convenient sources for health guidance, symptom interpretation and general advice. Some companies position these technologies as helping users “understand” possible conditions before seeing a clinician.
However, the new study indicates that widespread belief in AI’s medical capabilities may be misplaced and potentially dangerous, particularly when users interpret AI responses as definitive health guidance. Researchers emphasised that while these systems often perform well in controlled tests where conditions are fully specified, real-world interactions with people reveal major limitations.
The large-scale research effort involved nearly 1,300 participants in the UK who were asked to use AI tools to make decisions about a range of medical scenarios. These ranged from benign conditions like a common cold to potentially life-threatening situations such as head injuries requiring urgent care.
Key findings from the study showed that:
Users who employed AI tools to interpret symptoms did not make better decisions than those relying on traditional methods, such as internet search engines or official health websites.
Participants correctly identified relevant medical conditions only about 34.5% of the time.
Appropriate courses of action, such as seeking urgent care or consulting a general practitioner, were chosen in only about 44.2% of cases.
Overall, there was no clear advantage in outcomes for those using AI compared with other information sources.
The researchers cautioned that this performance gap reflects challenges both in how people interact with AI and in how AI interprets incomplete or vague information from users. Many participants did not know how to describe their symptoms precisely enough to get useful guidance from the systems, while the models themselves sometimes returned conflicting or misleading advice.
One of the most concerning aspects uncovered in the research was the communication disconnect between AI systems and human users. Participants often struggled to provide the precise details necessary for AI to make accurate assessments. At the same time, the responses they received blended useful insights with inaccurate or unhelpful suggestions, making it difficult for users to determine a safe course of action.
For example, when two participants described what were essentially the same set of symptoms — such as severe headache and light sensitivity — the AI systems sometimes produced dramatically different recommendations depending on the phrasing of the question. One person might be told to seek immediate medical attention, while another was advised to rest at home.
Lead researchers argue that such inconsistency highlights how poorly current AI chatbots handle nuanced or context-dependent medical queries — a critical shortcoming when user health could be at stake.
Health-related AI applications are often promoted with glowing language that can create unrealistic expectations. Some medical chatbots present themselves as offering comprehensive or expert-level guidance, but in practice, they lack the clinical judgment and contextual awareness of trained practitioners.
Experts caution that many AI systems, especially those geared toward consumer use, have not undergone the rigorous evaluation and clinical testing required for medical devices or professional diagnostic tools. This regulatory gap means that users may trust outputs that are not backed by robust evidence or safety verification.
Health regulators in various countries are increasingly scrutinising how AI tools are marketed and used in healthcare settings, but current rules lag behind the rapid spread of these technologies.
Even the most advanced large language models, which can demonstrate high accuracy in controlled scenarios, face major challenges when applied to real-world health advice. Several underlying issues contribute to this:
Incomplete and varied user information: People often provide partial or unclear descriptions that make accurate interpretation difficult for AI.
Context sensitivity: Medical assessment often requires understanding broader context — something AI may struggle to infer from brief text prompts.
Bias and training limitations: Many AI models are trained on datasets that reflect historical clinical language or internet content that may not fully represent real patient scenarios.
Conflicting advice patterns: AI responses can blend correct and incorrect elements, making it hard for users to distinguish safe guidance.
These factors contribute to AI’s inherent limitations when providing health information without professional supervision.
Technology companies behind major AI models acknowledge both the potential and pitfalls of their systems. While many emphasise that their tools are not intended to replace healthcare professionals, critics argue that such disclaimers are not always prominent or clear enough for users to interpret responses responsibly.
Some developers are exploring specialised healthcare AI systems with dedicated training and safety layers. However, experts say that robust safeguards, regulatory oversight, and alignment with medical standards are essential before such systems can be trusted for general medical advice.
There are also calls for clear labels and warnings that emphasise the limitations of AI when used for medical self-diagnosis, including reminders to consult licensed practitioners for definitive guidance.
The risks associated with inaccurate or inconsistent AI medical guidance are not merely theoretical. In real-world cases documented by journalists and health professionals, patients seeking medical answers from AI have received troubling responses that contributed to anxiety, misinformation, and unnecessary delays in care.
For instance, some AI applications incorrectly flagged non-serious symptoms as severe conditions, while others failed to recognise when urgent medical intervention was necessary. In one notable example, a chatbot misled a young patient about cancer progression, causing significant distress before clinical evaluation clarified the actual situation.
Such incidents underscore the potential for AI to do harm when used outside its intended scope or without appropriate expert oversight.
Healthcare professionals and AI researchers alike warn that while artificial intelligence holds promise for supporting clinical workflows, administrative tasks, and data analysis, its use for standalone medical advice remains highly problematic.
Dr. Adam Mahdi, a co-author of the Oxford study, emphasised that the disconnect between AI’s technical capability and real-world performance should be a “wake-up call” for developers, regulators and users alike.
Other experts suggest that future progress in this area will depend on developing AI systems that can reliably interpret human cues, contextual nuance and complex medical information — requirements that go far beyond current capabilities.
Until then, clinicians and patient advocates urge caution and stress that AI should not be relied upon as a replacement for professional medical advice or judgement.
The new research highlights several practical takeaways for individuals considering using AI for health questions:
AI should not replace medical professionals: When in doubt about symptoms or medical conditions, users should seek qualified healthcare advice rather than depending solely on machine responses.
Verify information from trusted sources: Users are encouraged to cross-reference any AI-provided medical information with reputable health websites or direct consultation with practitioners.
Understand AI’s limitations: Knowledge of how AI models work and their shortcomings can help users interpret responses more critically.
Disclaimer:
This article synthesises findings from recent research and reporting on the risks associated with using AI for medical advice. It is intended for informational purposes and does not constitute medical guidance. Readers should consult healthcare professionals for personal medical concerns.