AI chatbots operated by Microsoft and Google are spitting out misinformation about the Israel-Hamas war – including false claims that the two sides agreed to a cease-fire.
Google’s Bard declared in one response on Monday that “both sides are committed” to maintaining peace “despite some tensions and occasional flare-ups of violence,” according to Bloomberg.
Bing Chat wrote Tuesday that “the ceasefire signals an end to the immediate bloodshed.”
No such ceasefire has occurred. Hamas has continued firing a barrage of rockets into Israel, while Israel’s military on Friday ordered the evacuation of roughly 1 million people in Gaza ahead of an expected ground invasion to root out the terrorist group.
Google’s Bard also bizarrely predicted on Oct. 9 that “as of October 11, 2023, the death toll has surpassed 1,300.”
The chatbots “spit out glaring errors at times that undermine the general credibility of their responses and risk adding to public confusion about a complex and rapidly evolving war,” Bloomberg reported after conducting the evaluation.
The problems were discovered after Google’s Bard and Microsoft’s Bing Chat were asked to answer a series of questions on the war – which broke out last Saturday after Hamas launched a surprise attack on Israeli border towns and military bases, killing more than 1,200 people.
Despite the errors, Bloomberg noted that the chatbots “generally stayed balanced on a sensitive topic, and sometimes gave decent news summaries” in response to user questions. Bard reportedly apologized and retracted its claim about the ceasefire when asked if it was sure, while Bing had amended its response by Wednesday.
Both Microsoft and Google have acknowledged to users that their chatbots are experimental and prone to including false information in their responses to user prompts.
These inaccurate answers, referred to as “hallucinations,” are a source of particular concern for critics who warn that AI chatbots are fueling the spread of misinformation.
When reached for comment, a Google spokesperson said the company released Bard and its AI-powered search features as “opt-in experiments” and is “always working to improve their quality and reliability.”
“We take information quality seriously across our products, and have developed protections against low-quality information along with tools to help people learn more about the information they see online,” the Google spokesperson said.
“We continue to quickly implement improvements to better protect against low-quality or outdated responses for queries like these,” the spokesperson added.
Google noted that its trust and safety teams are actively monitoring Bard and working quickly to address issues as they arise.
Microsoft told the outlet that it had investigated the mistakes and would be making adjustments to the chatbot in response.
“We have made significant progress in the chat experience by providing the system with text from the top search results and instructions to ground its responses in those top search results, and we will continue making further investments to do so,” a Microsoft spokesperson said.
The Post has reached out to Microsoft for further comment.
Earlier this year, experts told The Post that AI-generated “deepfake” content could wreak havoc on the 2024 presidential election if protective measures aren’t in place ahead of time.
In August, British researchers found that ChatGPT, the chatbot created by Microsoft-backed OpenAI, generated cancer treatment regimens that contained a “potentially dangerous” mix of correct and false information.