Alphabet, the parent company of Google, is in deep trouble over its new AI model Gemini. Users have pointed out that the platform shows a strong bias against Indian Prime Minister Narendra Modi, suggesting it is justified to label him a fascist.
Google’s Gemini AI has been drawing criticism on Twitter for being a little too woke, so much so that it made some glaring mistakes when generating images. For instance, it would often depict historical figures as people of colour. There was also a general consensus that Gemini was reluctant to show white men and women, as it struggled to balance historical accuracy with inclusivity.
Note how Gemini has been trained, for American non-allies, American allies and Americans? Shame @Google. pic.twitter.com/d0uwXzBPsv
— Arnab Ray (@greatbong) February 22, 2024
The Ministry of Electronics and Information Technology (MeitY) is preparing to issue a formal notice to Google concerning contentious responses generated by its AI platform, Gemini, particularly in relation to Prime Minister Narendra Modi, according to a report by The Indian Express.
One American ally, one the guy who invests in @Google. pic.twitter.com/jjyU2ja50e
— Arnab Ray (@greatbong) February 22, 2024
A user on X revealed that Google’s AI system, previously known as Bard, had provided an inappropriate response to a user query, sparking concerns within the ministry.
A screenshot shared on social media revealed Gemini’s response when asked whether PM Modi is a ‘fascist’: it stated that he has been “accused of implementing policies some experts have characterised as fascist.” In contrast, when asked a similar question about former US President Donald Trump, the AI redirected the user to conduct a Google search for accurate information.
Similarly, when the AI bot was asked whether Ukraine’s Volodymyr Zelenskyy was a fascist, it responded with talk of nuance, balance and context.
Minister of State for Electronics and Information Technology Rajeev Chandrasekhar expressed concern over these incidents, citing potential violations of Rule 3(1)(b) of the Intermediary Rules under the IT Act, along with other provisions of the criminal code. These rules require intermediaries like Google to exercise due diligence in managing third-party content in order to retain immunity from legal liability.
These are direct violations of Rule 3(1)(b) of Intermediary Rules (IT rules) of the IT act and violations of several provisions of the Criminal code. @GoogleAI @GoogleIndia @GoI_MeitY https://t.co/9Jk0flkamN
— Rajeev Chandrasekhar 🇮🇳 (@RajeevRC_X) February 23, 2024
This development highlights the ongoing debate over the regulatory framework governing generative AI platforms like Gemini and ChatGPT. Google recently apologized for inaccuracies in historical image generation by its Gemini AI tool, following criticism over its portrayal of historical events.
A senior official from the ministry, speaking anonymously, disclosed that this is not the first instance of biased responses from Google’s AI. The ministry intends to issue a show-cause notice to Google demanding an explanation for Gemini’s contentious outputs; failure to provide a satisfactory response could lead to legal consequences, the official said.
In response to inquiries, Gemini provided a nuanced response regarding PM Modi, acknowledging accusations of fascism while emphasizing differing perspectives on his governance. However, similar to previous incidents, the AI offered a generic response when questioned about Donald Trump.
Last year, concerns were raised when Gemini, then known as Bard, refused to summarise an article from a conservative outlet, citing misinformation concerns. Google clarified that Bard operates as an experimental model trained on publicly available data and does not necessarily reflect the company’s standpoint.