Google says its AI image-generator would sometimes ‘overcompensate’ for diversity
Google apologized Friday for its faulty rollout of a new artificial intelligence image-generator, acknowledging that in some cases the tool would "overcompensate" in seeking a diverse range of people even when such a range didn't make sense.

In a 2022 technical paper, the researchers who developed Imagen warned that generative AI tools can be used for harassment or spreading misinformation "and raise many concerns regarding social and cultural exclusion and bias." Those considerations informed Google's decision not to release "a public demo" of Imagen or its underlying code, the researchers added at the time.

But users looking for someone of a specific race or ethnicity, or in particular cultural contexts, "should absolutely get a response that accurately reflects what you ask for," Prabhakar Raghavan, a Google executive, said in a message posted Friday. While the tool overcompensated in response to some prompts, Raghavan said, in others it was "more cautious than we intended and refused to answer certain prompts entirely — wrongly interpreting some very anodyne prompts as sensitive."

He didn't specify which prompts he meant, but Gemini routinely rejects requests involving certain subjects such as protest movements, according to tests of the tool by the AP on Friday, in which it declined to generate images about the Arab Spring, the George Floyd protests or Tiananmen Square. In one instance, the chatbot said it didn't want to contribute to the spread of misinformation or the "trivialization of sensitive topics."

Much of this week's outrage over Gemini's outputs originated on X, formerly Twitter, and was amplified by the platform's owner, Elon Musk, who decried Google for what he described as its "insane racist, anti-civilizational programming." Musk, who has his own AI startup, has frequently criticized rival AI developers, as well as Hollywood, for alleged liberal bias.

University of Washington researcher Sourojit Ghosh, who has studied bias in AI image-generators, said Friday he was disappointed that Raghavan's message ended with a disclaimer that the Google executive "can't promise that Gemini won't occasionally generate embarrassing, inaccurate or offensive results." For a company that has perfected search algorithms and has "one of the biggest troves of data in the world, generating accurate results or unoffensive results should be a fairly low bar we can hold them accountable to," Ghosh said.






