Google said Thursday it would “pause” its Gemini chatbot’s image generation tool after it was widely panned on social media for creating “diverse” images that weren’t historically or factually accurate – such as Black Vikings, Native American popes and female NHL players.
Users blasted Gemini as “absurdly woke” and “unusable” after requests to generate representative images for subjects such as America’s Founding Fathers resulted in bizarrely revisionist pictures.
“We’re already working to address recent issues with Gemini’s image generation feature,” Google said in an announcement posted on X. “While we do that, we’re going to pause the image generation of people and will re-release an improved version soon.”
Examples included an AI image of a Black man who appeared to represent George Washington, complete with a white powdered wig and Continental Army uniform, and a Southeast Asian woman wearing papal attire, even though all 266 popes throughout history have been white men.
In one shocking example uncovered by The Verge, Gemini even generated “diverse” representations of Nazi-era German soldiers, including an Asian woman and a Black man decked out in 1943 military garb.
Google had earlier admitted that the chatbot’s erratic behavior needed to be fixed.
“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
The Post has reached out to Google for further comment.
It was a major misstep for Google, which had just rebranded its main AI chatbot product under the Gemini name earlier this month and introduced heavily touted new features – including image generation.
The blunder also came days after OpenAI, which operates the popular ChatGPT, introduced a new AI tool called Sora that creates videos based on users’ text prompts.
Since Google has not published the parameters that govern the Gemini chatbot’s behavior, it’s difficult to get a clear explanation of why it was inventing diverse versions of historical figures and events.
When asked by The Post to provide its trust and safety guidelines, Gemini acknowledged that they weren’t “publicly disclosed due to technical complexities and intellectual property considerations.”
The chatbot also admitted it was aware of “criticisms that Gemini might have prioritized forced diversity in its image generation, resulting in historically inaccurate portrayals.”
“The algorithms behind image generation models are complex and still under development,” Gemini said. “They may struggle to understand the nuances of historical context and cultural representation, leading to inaccurate outputs.”