Google’s AI Overview has gone rogue once again, this time getting the current year wrong. A user on X shared a screenshot of a Google Search asking, “is it 2027 next year.” In response, AI Overview produced the incorrect answer: “No, 2027 is not next year. 2026 is next year.”
‘Room for improvement’
Responding to the screenshot, Musk wrote, “Room for improvement.”
The most recent AI controversy to spark public outrage involves Grok AI and its generation of explicitly sexual images of women and children. These non-consensual deepfakes, created by Grok AI on Musk’s X platform, have prompted several countries to consider action against the service.
Inaccurate information
Meanwhile, this is not the first time Google’s AI Overview has served up inaccurate information. The feature first sparked controversy shortly after its launch, when it told users to add glue to pizza or eat rocks for vitamins.
As Google made progress with Gemini, AI Overview’s accuracy seemed to improve, but the feature was once again mired in controversy after it claimed ‘Call of Duty: Black Ops 7’ was a fake game.
The current year is 2025
Google appears to have briefly disabled the AI Overview for this query amid the controversy over the “2026 is next year” answer; the feature now correctly states that the current year is 2025. Meanwhile, asking the same question, or variations of it, to the company’s AI Mode, backed by Gemini 3, does not produce the same inaccuracies.
Notably, a recent investigation by The Guardian also revealed that AI Mode continues to serve inaccurate health information that puts people at risk of harm.
The AI reportedly went on to wrongly advise people with pancreatic cancer to avoid high-fat foods, the exact opposite of expert guidance; experts say avoiding such foods may increase the risk of patients dying from the disease.
Ethical lapses in training data
In another example, the AI provided ‘bogus’ information about liver function tests that could put people with serious liver disease at risk. It also provided ‘completely wrong’ information about women’s cancer tests, which reportedly could lead to people dismissing genuine symptoms.
The question remains a difficult one: should we trust AI tools at all? Or are all AI chatbots prone to such controversies?
AI chatbots work on prompts, and a single piece of wrong information can misguide and mislead young minds. The risks range from harmful content generation to inadequate child safeguards and ethical lapses in training data.