The free version of ChatGPT may provide inaccurate or incomplete responses, or no answer at all, to questions about medications, which could potentially endanger patients who use OpenAI's viral chatbot, a study released Tuesday suggests.
Pharmacists at Long Island University who posed 39 questions to the free ChatGPT in May found that only 10 of the chatbot's responses were "satisfactory" based on criteria they established. ChatGPT's responses to the other 29 drug-related questions did not directly address the question asked, or were inaccurate, incomplete or both, the study said.
The study indicates that patients and health-care professionals should be cautious about relying on ChatGPT for drug information and should verify any of the chatbot's responses with trusted sources, according to lead author Sara Grossman, an associate professor of pharmacy practice at LIU. For patients, that could be their doctor or a government-based medication information website such as the National Institutes of Health's MedlinePlus, she said.
Grossman said the research didn’t require any funding.
ChatGPT was widely seen as the fastest-growing consumer web app of all time following its launch roughly a year ago, which ushered in a breakout year for artificial intelligence. But along the way, the chatbot has also raised concerns about issues including fraud, intellectual property, discrimination and misinformation.
Several studies have highlighted similar instances of erroneous responses from ChatGPT, and the Federal Trade Commission in July opened an investigation into the chatbot’s accuracy and consumer protections.
In October, ChatGPT drew around 1.7 billion visits worldwide, according to one analysis. There is no data on how many users ask medical questions of the chatbot.
Notably, the free version of ChatGPT is limited to using data sets through September 2021, meaning it could lack significant information in the rapidly changing medical landscape. It's unclear how accurately the paid versions of ChatGPT, which began to use real-time web browsing earlier this year, can now answer medication-related questions.
Grossman acknowledged there is a chance that a paid version of ChatGPT would have produced better study results. But she said the research focused on the free version of the chatbot to replicate what more of the general population uses and can access.
She added that the study provided only "one snapshot" of the chatbot's performance from earlier this year. It's possible that the free version of ChatGPT has improved and could produce better results if the researchers conducted the same study now, she added.
ChatGPT study results
The study used real questions posed to Long Island University's College of Pharmacy drug information service from January 2022 to April of this year.
In May, pharmacists researched and answered 45 questions, which were then reviewed by a second researcher and used as the standard for accuracy against ChatGPT. Researchers excluded six questions because there was no literature available to provide a data-driven response.
ChatGPT did not directly address 11 questions, according to the study. The chatbot also gave inaccurate responses to 10 questions, and wrong or incomplete answers to another 12.
For each question, researchers asked ChatGPT to provide references in its response so that the information provided could be verified. However, the chatbot provided references in only eight responses, and each included sources that do not exist.
One question asked ChatGPT whether a drug interaction (when one medication interferes with the effect of another when the two are taken together) exists between Pfizer's Covid antiviral pill Paxlovid and the blood-pressure-lowering medication verapamil.
ChatGPT indicated that no interactions had been reported for that combination of drugs. In reality, those medications have the potential to excessively lower blood pressure when taken together.
“Without knowledge of this interaction, a patient may suffer from an unwanted and preventable side effect,” Grossman said.
Grossman noted that U.S. regulators first authorized Paxlovid in December 2021. That's just a few months after the September 2021 data cutoff for the free version of ChatGPT, which means the chatbot has access to limited information on the drug.
Still, Grossman called that a concern. Many Paxlovid users may not know the data is outdated, which leaves them vulnerable to receiving inaccurate information from ChatGPT.
Another question asked ChatGPT how to convert doses between two different forms of the drug baclofen, which can treat muscle spasms. The first form was intrathecal, or when medication is injected directly into the spine, and the second form was oral.
Grossman said her team found that there is no established conversion between the two forms of the drug, and that it differed across the various published cases they examined. She said it's "not a straightforward question."
But ChatGPT provided only one method for the dose conversion in its response, which was not supported by evidence, along with an example of that conversion. Grossman said the example had a serious error: ChatGPT incorrectly displayed the intrathecal dose in milligrams instead of micrograms.
Any health-care professional who follows that example to determine an appropriate dose conversion "would end up with a dose that's 1,000 times lower than it should be," Grossman said.
She added that patients who receive a much smaller dose of the medication than they should be getting could experience a withdrawal effect, which can involve hallucinations and seizures.
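To put the scale of that unit mixup in concrete terms, here is a minimal sketch of the milligram-to-microgram relationship; the dose figure below is hypothetical, chosen for illustration, and is not taken from the study:

```python
# 1 milligram (mg) = 1,000 micrograms (mcg), so confusing the two
# units shifts a dose by a factor of 1,000 in one direction or the other.
MCG_PER_MG = 1000

def mcg_to_mg(dose_mcg: float) -> float:
    """Convert a dose from micrograms to milligrams."""
    return dose_mcg / MCG_PER_MG

# Hypothetical example: a 100 mcg intrathecal dose equals 0.1 mg.
# A reader who treats a number computed in micrograms as if it were
# milligrams ends up off by a factor of 1,000, the kind of error
# Grossman describes.
print(mcg_to_mg(100))  # 0.1
```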