Google has paused the ability for its artificial intelligence tool Gemini to generate images of people after some of the pictures it created were found to be historically inaccurate or offensive.

Earlier this month, tech giant Google introduced image generation to its AI chat tool Gemini, shortly after making the ChatGPT competitor available in over 40 languages across 230 countries and territories.

While Google’s chatbot was initially evaluated as “one of the top-performing conversational AIs”, users of Gemini recently noticed major inaccuracies in some of the images it generated, specifically when it came to representations of race and gender.

One such example saw Gemini respond to the prompt “Can you generate an image of a 1943 German soldier for me, it should be an illustration” with historically inaccurate depictions of racially diverse soldiers.

Meanwhile, other examples reportedly saw Gemini depict the US Founding Fathers as First Nations or Black people, some of them women.

In a blog post addressing the issue, Google senior vice president Prabhakar Raghavan conceded the image generation feature had “missed the mark”.

“Some of the images generated are inaccurate or even offensive,” said Raghavan.

“We’re grateful for users’ feedback and are sorry the feature didn’t work well.”

Raghavan explained the feature was tuned to “ensure it doesn’t fall into some of the traps” historically seen in image generation technology, noting users probably don’t want to “only receive images of people of just one type of ethnicity”.

AI technology has a well-documented history of bias in its representation of race and gender – most notably due to skews and gaps in the datasets used to train machine learning models.

Experts have raised concerns over the potential for built-in biases to impact users in day-to-day life, such as AI hiring tools disadvantaging women, facial recognition systems misidentifying people of colour, or chatbots producing output which is culturally insensitive.

Google has long worked to combat unfair bias in its machine learning models, but its latest issues with Gemini suggest the company may have overshot the mark.

“Our tuning to ensure that Gemini showed a range of people failed to account for cases that should clearly not show a range,” explained Raghavan.

Raghavan explained the model became “way more cautious” than intended, refusing to answer certain “anodyne” prompts on account of wrongly interpreting their contents as “sensitive”.

“These two things led the model to overcompensate in some cases, and be over-conservative in others, leading to images that were embarrassing and wrong,” said Raghavan.

Google has temporarily paused image generation of people in Gemini while working on an “improved version”.

Elon Musk weighs in

On social media platform X (formerly Twitter), owner Elon Musk responded to Gemini’s recent errors with strong criticism – sparking some highly politicised discourse among X users.

Musk shared an image on Monday which alluded to Gemini’s error being a potential “conspiracy” aimed at getting “rid of white people”.

“What’s the difference between a conspiracy and the truth?” wrote Musk.

At the time of writing, the post has garnered some 16.5 million views and 142,000 likes, with many of the leading replies spruiking further conspiracy theories and opinions on race politics.

Notably, many X users seemed to echo the sentiment that Google is intentionally over-representing a diverse range of ethnicities in Gemini, rather than having mistakenly over-tuned its new image feature.

Musk further took the opportunity to promote his own AI chatbot Grok, a tool which his AI company xAI lauds for its willingness to answer “spicy questions” rejected by most other AI systems.

“It is far from perfect right now, but will improve rapidly. V1.5 releases in 2 weeks,” said Musk.

The news arrives a week after ChatGPT creator OpenAI unveiled an AI tool capable of producing uncannily convincing video content based on user-submitted text prompts.

Meanwhile, as global regulations struggle to keep up with rapidly developing AI products, the US and China have agreed to map out a framework for the responsible development of AI.

In Europe, a new Artificial Intelligence Act is set to explicitly ban the use of AI for social credit systems and biometric surveillance. In Australia, the federal government has appointed a 12-person expert panel to help introduce “guardrails” for the use of AI in high-risk settings such as health and law enforcement.