ChatGPT is injurious to health: Why you should not take medical advice from OpenAI's chatbot

FP Explainers December 11, 2023, 19:01:15 IST

Researchers have cautioned against utilising the free version of OpenAI’s ChatGPT to ask questions related to medications. This is because the chatbot may provide inaccurate or incomplete responses or in some cases, no response at all


Whether on Google or ChatGPT, looking up illness symptoms online can put your health at risk.

Given the popularity of the artificial intelligence chatbot developed by OpenAI, researchers have warned against using the free edition of the tool to ask questions about medications.

This is because the chatbot may provide inaccurate or incomplete responses or in some cases, no response at all.

Let’s take a closer look.

Inaccurate answers to medical questions


According to a study by Long Island University, ChatGPT provided inaccurate or incomplete answers to nearly three-fourths of drug-related questions.

The researchers asked 39 medication-related questions from their College of Pharmacy drug information service. The AI chatbot’s answers were then compared with responses written and reviewed by trained pharmacists.

According to the study, ChatGPT gave accurate answers to only 10 of the 39 questions, roughly a quarter of the total. Its responses to the remaining 29 questions were inaccurate, incomplete, or failed to address the question at all.


The findings were presented Tuesday at the annual meeting of the American Society of Health-System Pharmacists in Anaheim, California.

According to CNBC, researchers asked ChatGPT for references in order to verify its responses. The chatbot provided references in only eight of the responses, and every one of those references cited non-existent sources.

Similarly, previous studies have shown that ChatGPT can produce convincing fake scientific references when asked medical questions, even including the names of real authors who have published in journals.

The study drew attention to one case in which ChatGPT falsely claimed that there was no interaction between Pfizer’s Paxlovid and the blood pressure-lowering drug verapamil. In reality, taking these drugs together can dangerously lower blood pressure, putting patients at risk.


Experts advise caution

“Using ChatGPT to address this question would put a patient at risk for an unwanted and preventable drug interaction,” lead author Sara Grossman, an associate professor of pharmacy practice at LIU, wrote in an email to CNN.

Notably, ChatGPT’s free version is trained on data only through September 2021, which could lead to outdated recommendations in a rapidly evolving medical field.

According to the study’s findings, anyone considering using ChatGPT for drug-related information — patients and healthcare professionals alike — should proceed with caution and verify any medical advice with professionals directly, whether using the free version or the paid version with access to real-time data.

“Healthcare professionals and patients should be cautious about using ChatGPT as an authoritative source for medication-related information,” India Today quoted Grossman as saying.

OpenAI’s response

Responding to the study, an OpenAI spokesperson stressed that users are clearly advised against using ChatGPT’s responses as “a substitute for professional medical advice or traditional care.”


The spokesperson also shared a section of OpenAI’s usage policy, which states that the company’s “models are not fine-tuned to provide medical information,” reported CNBC.

The usage policy also states that people should never use ChatGPT to provide diagnostic or treatment services for serious medical conditions.

ChatGPT enjoys global popularity

Launched in November 2022, ChatGPT is an experimental AI chatbot from OpenAI that went on to become the fastest-growing consumer application in history, with over 100 million users signing up in just two months.

However, the chatbot has also raised concerns about misinformation, fraud, discrimination, and intellectual property along the way.

A number of other studies have reported similarly incorrect responses from ChatGPT. According to The Washington Post, the Federal Trade Commission launched an investigation into the chatbot’s accuracy and consumer protections in July.
