A top Google executive responsible for the “absurdly woke” AI chatbot Gemini has come under fire after allegedly declaring in tweets that “white privilege is f—king real” and America is rife with “egregious racism.”
The politically charged tweets allegedly made by Jack Krawczyk, the senior director of product for Gemini Experiences, resurfaced on X on Thursday — a day after The Post reported on Gemini’s strange habit of generating “diverse” images that were historically or factually inaccurate when users asked for pictures of Vikings or America’s Founding Fathers.
By Thursday afternoon, Krawczyk had set his X account to private and scrubbed any mention of Google or his role at the company from his account bio.
Although Krawczyk, 40, has locked down his X feed, screenshots of his purported tweets, most of them made before he was hired by Google in 2020, revealed a distinctly ultra-progressive bias.
“White privilege is f—king real,” Krawczyk allegedly wrote in a tweet dated April 13, 2018, according to screenshots of the post circulating on X. “Don’t be an a—hole and act guilty about it – do your part in recognizing bias at all levels of egregious.”
On Jan. 20, 2021, Krawczyk allegedly referred to President Biden’s inaugural address as “one of the best ever” for “acknowledging systemic racism” and “reiterating the American ideal is the dream for the world but we need to work on ourselves to earn it.”
In another post from Oct. 21, 2020 – apparently after he voted against Donald Trump in the presidential election – Krawczyk allegedly wrote: “I’ve been crying in intermittent bursts for the past 24 hours since casting my ballot. Filling in that Biden/Harris line felt cathartic.”
The Polish-born tech savant also allegedly described America as a place “where racism is the #1 value our populace seeks to uphold above all” and declared that “we obviously have egregious racism in this country” in other screenshots that made the rounds on social media.
Efforts to reach Krawczyk directly were not immediately successful.
Google temporarily disabled Gemini’s image generation tool Thursday after the kerfuffle, with Krawczyk admitting in a statement that it was “missing the mark” by producing revisionist images.
Google declined to comment on the tweets and referred to the company’s earlier statement on its decision to “pause” Gemini’s image generation tool.
Critics were quick to suggest that Krawczyk’s alleged personal bias had contributed to Gemini’s penchant for prioritizing “diverse” outputs over accuracy.
“The head of Google’s Gemini AI, everyone. And you wonder why it discriminates against white people,” said @LeftismForU, an account that compiled most of the screenshots.
“Woke, race obsessed idiot is in charge of product at Gemini,” declared Ian Miles Cheong, an influencer who frequently interacts with Elon Musk on X.
Musk personally weighed in on the tweets, describing the Google employee as an “a—hole” and a “racist douchenozzle.”
Musk also shared The Post’s cover story on Gemini’s bizarre images, writing: “The woke mind virus is killing Western Civilization. Google does the same thing with their search results. Facebook & Instagram too. And Wikipedia.”
Krawczyk was described as Google AI’s “teacher” in a Men’s Health profile from last December.
The Post could not immediately confirm the authenticity of every screenshot, but the alleged tweets have gone viral.
A search for the phrase “Jack Krawczyk white privilege” produced his now-viral thread as the first result.
When the bizarre behavior of Gemini’s image generation tool surfaced on Wednesday, Krawczyk was the first Google employee to weigh in on the matter – declaring in a statement that his team was “working to improve these kinds of depictions immediately.”
“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here,” Krawczyk told The Post.
Since Google does not publish details of the model that governs the Gemini chatbot’s behavior, it is difficult to pin down the exact cause that led the software to invent diverse versions of historical figures and events.
Google’s “training process” with human input is one plausible explanation for the behavior, according to Fabio Motoki, a lecturer at the UK’s University of East Anglia who co-authored a paper last year that found a noticeable left-leaning bias in ChatGPT.
“Keep in mind that reinforcement learning from human feedback (RLHF) is about people telling the model what is better and what is worse, in practice shaping its ‘reward’ function – technically, its loss function,” Motoki told The Post.
“So, depending on which people Google is recruiting, or which instructions Google is giving them, it could lead to this problem.”
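Motoki’s point can be illustrated with a toy example. A common way to fit a reward model from human preferences is a Bradley-Terry-style logistic update over pairwise comparisons: raters pick which of two outputs they prefer, and the preferred output’s reward score is nudged up while the rejected one’s is nudged down. The function and data below are hypothetical, not Google’s actual system; the sketch only shows how a skewed rater pool mechanically becomes a skewed reward function.

```python
# Toy sketch of preference-based reward shaping (hypothetical,
# not Google's implementation). Raters supply pairwise choices;
# a logistic (Bradley-Terry-style) gradient step moves each
# output's reward score toward the raters' preferences.
import math

def update_rewards(rewards, comparisons, lr=0.5):
    """rewards: dict mapping output label -> reward score.
    comparisons: list of (preferred, rejected) pairs from raters."""
    for preferred, rejected in comparisons:
        # Probability the current scores assign to the rater's choice.
        p = 1.0 / (1.0 + math.exp(rewards[rejected] - rewards[preferred]))
        # Gradient step: raise the preferred score, lower the rejected one.
        rewards[preferred] += lr * (1.0 - p)
        rewards[rejected] -= lr * (1.0 - p)
    return rewards

rewards = {"A": 0.0, "B": 0.0}
# If every rater in the pool prefers output A, A's reward climbs and
# B's falls -- a systematic bias among raters becomes the model's bias.
update_rewards(rewards, [("A", "B")] * 20)
```

In a real RLHF pipeline the reward scores are produced by a neural network trained on such comparisons, but the dependence on who the raters are, and what instructions they were given, is the same.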
Krawczyk has been a Google employee for four years. He previously worked on the Google Assistant program, according to his LinkedIn account.
Before coming to Google, he held roles at WeWork, where he served as vice president of product management, as well as at VSCO and United Masters.