The ongoing controversy surrounding Google’s Gemini AI chatbot has drawn criticism from Tesla CEO and xAI founder Elon Musk.
At the heart of the debate is the chatbot’s text-to-image generation feature, which has come under fire, and been accused of being “too woke,” for producing historically inaccurate images, including depictions of World War II soldiers and America’s founding fathers as people of colour.
In recent days, users on social media have raised concerns about the tool’s handling of racial representation in its portrayals of significant historical figures, and Gemini is clearly struggling to balance historical accuracy with inclusivity.
Many have accused Gemini of bias, alleging that it struggles to generate images of white individuals, which critics see as evidence of ideological overreach.
Elon Musk, who has often been critical of generative AI and frequently lambasts models other than his own Grok, has weighed in on the controversy, condemning Google for what he perceives as overzealous programming.
He not only accused the company of mishandling the AI’s image generation capabilities, but also of being “insanely racist and anti-civilization” in its approach to programming.
In a series of tweets, Musk shared his concerns and criticized Google’s response to user requests for specific images. He mocked Gemini’s response to a query about Justice Clarence Thomas, suggesting that the AI’s programming reflects harmful racial stereotypes.
Meanwhile, former Republican presidential candidate Vivek Ramaswamy echoed Musk’s sentiments, citing the controversy as evidence of Google’s descent into an ideological echo chamber.
Ramaswamy argued that the rollout of Google’s large language model showcased inherent biases within the company’s workforce, which then manifested in the AI’s programming. “The globally embarrassing rollout of Google’s LLM proves that James Damore was 100% correct about Google’s descent into an ideological echo chamber. Employees working on Gemini surely realized it was a mistake to make it so blatantly racist, but they likely kept their mouths shut because they didn’t want to get fired like Damore. These companies program their employees with broken incentives, and those employees then program the AI with the same biases,” he said.
In response to the backlash, Jack Krawczyk, Senior Director of Product Management for Gemini Experiences at Google, acknowledged the concerns raised by users. He emphasized Google’s commitment to addressing representation and bias in AI technology, promising immediate action to rectify inaccuracies in historical image generation.
Krawczyk stated, “We are aware that Gemini is offering inaccuracies in some historical image generation depictions, and we are working to fix this immediately. Historical contexts have more nuance to them, and we will further tune to accommodate that.”
In a statement on X about Gemini’s text-to-image capabilities, Google wrote: “We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
The company pledged to work on improvements before re-releasing an updated version of the AI tool. This move comes amid growing calls for transparency and accountability in AI development, highlighting the complex challenges faced by tech companies in navigating issues of bias and inclusivity.
(With inputs from agencies)