From ChatGPT to Google MedGemma: AI turns to health in 2026, but doctors warn of hidden risks

Unnati Gusain January 17, 2026, 13:55:09 IST

The latest AI wave in 2026 is all about health. With OpenAI, Google, and Anthropic rolling out upgraded chatbots to simplify medical reports, do you still need a doctor? Firstpost spoke to experts to get a detailed view.

Representational image created by AI

It appears that 2026 is fast becoming the year artificial intelligence turns its attention to healthcare. In just the first 15 days of the year, from OpenAI's ChatGPT Health and Anthropic's latest medical tools to Google's MedGemma 1.5, tech giants have raced to integrate health-focused capabilities into their AI platforms.

However, the big question remains: are these developments genuinely useful, or do they come with hidden risks?

To explore this, Firstpost spoke to Dr Ishwar Gilada, an infectious disease specialist, to understand how doctors view this growing overlap between AI and healthcare.

Are AI chatbots helpful in healthcare? Yes and no

The debate around AI health tools flared up again recently following the rollout of new health-focused chatbots, particularly OpenAI's ChatGPT Health. The launch made headlines in Australia, but for all the wrong reasons.

The Guardian recently reported that a 60-year-old man with no history of mental illness arrived at a hospital emergency department convinced that his neighbour was trying to poison him. Within 24 hours, his condition deteriorated: he began hallucinating and even attempted to flee the hospital.

Doctors were initially baffled but eventually traced the cause to an unexpected source. The man had been consuming sodium bromide daily, a chemical compound typically used in industrial cleaning and water treatment. He had reportedly purchased it online after ChatGPT allegedly advised that it could serve as a substitute for table salt, in response to his concerns about sodium intake.

Prolonged consumption of sodium bromide led to bromism, a rare but serious condition that can trigger confusion, hallucinations, and coordination issues as bromide builds up in the body.

The incident has reignited debate about the reliability of AI-generated medical advice and the risks of relying on chatbots for health guidance, especially when people act on their responses without consulting qualified doctors.

Speaking to Firstpost, Dr Gilada warned against precisely this kind of misuse. “AI chatbots in healthcare are fine, but only with limitations,” he said.

He explained that AI can handle basic, general-knowledge queries or help interpret medical reports. "However, people tend to forget where to stop, and try to diagnose every health issue at home with the help of an AI chatbot. This is where limitations must come into play," he said.

Dr Gilada suggested that AI tools should be programmed to stop short of giving medical diagnoses and instead prompt users to seek professional care. "Rather than answering a complex query, the AI chatbot's algorithm should simply direct the user to go see a doctor," he added.

"We must not forget that AI chatbots are just a helping hand; they cannot replace doctors," Dr Gilada said.

AI chatbots focusing on health

So, what exactly are these new AI health tools offering?

Millions of people already turn to ChatGPT for answers about their health, and OpenAI acknowledges that medical questions rank among the chatbot's most frequent uses. Many users even upload test reports, scans, and other private health information in search of explanations.

To cater to this growing demand, OpenAI recently launched ChatGPT Health, a specialised version of its chatbot designed to securely connect with health and fitness apps such as Apple Health, Function, and MyFitnessPal. The company has also assured users that their personal medical information will not be used to train its AI models.

At the same time, Google is pushing ahead with its own medical AI developments. Its latest upgrade, MedGemma 1.5, marks a major advance in the use of artificial intelligence for medical imaging.

Shaped by direct feedback from healthcare professionals and developers, the update enhances the system's ability to process a range of imaging types, from CT and MRI scans to histopathology slides. It can also interpret chest X-ray time series, pinpoint anatomical features, and extract structured data from lab reports.

While these tools promise to make healthcare data easier to understand, medical experts warn that they come with clear limits.

AI models, they stress, can only analyse patterns and generate predictions; they cannot replicate the insight or experience of a trained clinician. Each patient is different, and no algorithm, however advanced, can substitute for professional medical judgement.

Unnati is a tech journalist with almost half a decade of experience and a keen eye for unique story angles. She reviews the latest consumer and lifestyle gadgets, and covers pop culture and social media news. When away from the keyboard, you might find her reading fiction, at the gym, or drinking coffee.
