
‘Absurdly woke’: Google’s AI chatbot spits out ‘diverse’ images of Founding Fathers, popes, Vikings

Google’s highly touted AI chatbot Gemini was blasted as “woke” after its image generator spit out factually or historically inaccurate pictures — including a woman as pope, black Vikings, female NHL players and “diverse” versions of America’s Founding Fathers.

Gemini’s bizarre results came after simple prompts, including one by The Post on Wednesday that asked the software to “create an image of a pope.” 

Instead of yielding a photo of one of the 266 pontiffs throughout history — all of them white men — Gemini provided pictures of a Southeast Asian woman and a black man wearing holy vestments.

Another Post query for representative images of “the Founding Fathers in 1789” was also far from reality.

Gemini responded with images of black and Native American individuals signing what appeared to be a version of the US Constitution — “featuring diverse individuals embodying the spirit” of the Founding Fathers.

Google admitted its image tool was “missing the mark.” (Google Gemini)
Google debuted Gemini’s image generation tool last week. (Google Gemini)

Another showed a black man appearing to represent George Washington, wearing a white wig and an Army uniform.

When asked why it had deviated from its original prompt, Gemini replied that it “aimed to provide a more accurate and inclusive representation of the historical context” of the period.

Generative AI tools like Gemini are designed to create content within certain parameters, leading many critics to slam Google for its progressive-minded settings. 

Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as “absurdly woke.”

Google said it was aware of the criticism and was actively working on a fix.

“We’re working to improve these kinds of depictions immediately,” Jack Krawczyk, Google’s senior director of product management for Gemini Experiences, told The Post.

“Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Social media users had a field day creating queries that provided confounding results.

“New game: Try to get Google Gemini to make an image of a Caucasian male. I have not been successful so far,” wrote X user Frank J. Fleming, a writer for the Babylon Bee, whose series of posts about Gemini on the social media platform quickly went viral.

In another example, Gemini was asked to generate an image of a Viking — the seafaring Scandinavian marauders that once terrorized Europe.

The chatbot’s strange depictions of Vikings included one of a shirtless black man with rainbow feathers attached to his fur garb, a black warrior woman, and an Asian man standing in the middle of what appeared to be a desert.

Famed pollster and “FiveThirtyEight” founder Nate Silver also joined the fray.

Silver’s request for Gemini to “make 4 representative images of NHL hockey players” generated a picture with a female player, even though the league is all male.

“OK I assumed people were exaggerating with this stuff but here’s the first image request I tried with Gemini,” Silver wrote.

Another prompt to “depict the Girl with a Pearl Earring” led to altered versions of the famous 1665 oil painting by Johannes Vermeer featuring what Gemini described as “diverse ethnicities and genders.”

Google added the image generation feature when it renamed its experimental “Bard” chatbot to “Gemini” and released an updated version of the product last week.

In one case, Gemini generated pictures of “diverse” representations of the pope. (Google Gemini)
Critics accused Google Gemini of valuing diversity over historical or factual accuracy. (Google Gemini)

The strange behavior could provide more fodder for AI detractors who fear chatbots will contribute to the spread of online misinformation.

Google has long said that its AI tools are experimental and prone to “hallucinations” in which they regurgitate fake or inaccurate information in response to user prompts.

In one instance last October, Google’s chatbot claimed that Israel and Hamas had reached a ceasefire agreement, when no such deal had occurred.