Google pauses AI tool Gemini's ability to generate images of people after historical inaccuracies


Google says it’s temporarily suspended the ability of Gemini, its flagship generative AI suite of models, to generate images of people while it works on updating the model to improve the historical accuracy of outputs.

In a post on the social media platform X, the company announced what it couched as a “pause” on generating images of people — writing that it’s working to address “recent issues” related to historical inaccuracies.

“While we do this, we’re going to pause the image generation of people and will re-release an improved version soon,” it added.

Google launched the Gemini image generation tool earlier this month. However, examples of it generating incongruous images of historical figures have been finding their way onto social media in recent days — such as images of the U.S. Founding Fathers depicted as American Indian, black or Asian — leading to criticism and even ridicule.

Writing in a post on LinkedIn, Paris-based venture capitalist Michael Jackson joined the pile-on today, branding Google’s AI “a nonsensical DEI parody.” (DEI stands for “Diversity, Equity and Inclusion.”)

In an earlier post on X yesterday, Google confirmed it was “aware” the AI was producing “inaccuracies in some historical image generation depictions”, adding in a statement: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”

Generative AI tools produce outputs based on their training data and model parameters, such as learned weights, along with any filters or system-level instructions their makers apply.

Such tools have more often faced criticism for producing outputs that are biased in more stereotypical ways, such as generating overtly sexualized imagery of women, or responding to prompts for high-status job roles with imagery of white men.

An earlier AI image classification tool made by Google caused outrage, back in 2015, when it misclassified black men as gorillas. The company promised to fix the issue but, as Wired reported a few years later, its “fix” was a pure workaround: Google simply blocked the tech from recognizing gorillas at all.




