Google's AI tool Gemini is generating images of Black, Native American, and Asian individuals more frequently than White individuals. Users suggest the tool has overcorrected for racial bias, depicting White historical figures as people of colour. Google says it is urgently addressing the issue, and the company has apologised for the inaccuracies, admitting its attempts at diverse results have missed the mark.
“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” says the Google statement posted on X. “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
In a statement to Fox News Digital, Gemini Experiences Senior Director of Product Management Jack Krawczyk repeated the same message, saying the company is "working to improve these kinds of depictions immediately" and that while Gemini's image generation does produce a wide range of people, it is "missing the mark here".
Fox News Digital also reported on its own tests of Gemini. When asked to provide a picture of a White person, Gemini refused, citing concerns about reinforcing harmful stereotypes, and urged focusing on individual qualities rather than race for a more inclusive society. It emphasised that reducing people to a single characteristic based on race is inaccurate and unfair, adding that racial generalisations have historically been used to justify oppression and violence against marginalised groups.
Similarly, when prompted for a picture of a Black person, Gemini declined but offered to showcase images celebrating the diversity and achievements of Black individuals, presenting photos of notable figures such as Maya Angelou and Barack Obama. When asked to highlight the achievements of White people, Gemini expressed hesitancy, citing the historical favouritism towards White individuals in media representation and the risk of perpetuating that imbalance. The AI instead advocated an inclusive approach, promoting the celebration of the diverse tapestry of human accomplishments rather than segregating achievements by race.
Earlier this month, Google launched image generation capabilities with its Gemini AI platform (previously known as Bard), joining competitors like OpenAI. However, recent social media discussions have raised doubts about its ability to provide historically accurate results, with concerns over potential bias towards racial and gender diversity.
AI has encountered real-world diversity questions before. Nearly a decade ago, Google faced criticism and had to issue an apology when its Photos app incorrectly labelled a photo of a Black couple as "gorillas".
OpenAI also faced allegations of perpetuating harmful stereotypes when users discovered that its DALL-E image generator predominantly produced images of White men in response to prompts for "chief executive".