AI Hallucination
Recent Highlights
All Stories for AI Hallucination
Keep Your Powder Dry: ChatGPT inventor Sam Altman doesn’t put trust in his creation
Mehul Das • It seems that OpenAI CEO Sam Altman doesn't trust ChatGPT, his own creation. During his trip to India, Altman said that he trusts the answers generated by ChatGPT less than anybody else on Earth does.
AI Hallucinates, Falsely Implicates: Man sues OpenAI after ChatGPT claimed he embezzled money
Mehul Das • OpenAI is being sued in the US after ChatGPT, its LLM chatbot, falsely claimed that a popular radio host from Georgia, US, embezzled money from a non-profit organisation. The chatbot went further, fabricating entire passages to support its claims.
Turn-It-In: AI fails students for not using AI
Mehul Das • Several apps and programmes claim they can accurately tell when a document has been generated or written using AI. However, most of these tools, even the ones made by OpenAI to detect whether a piece was written by ChatGPT, give wrong results.
No election is safe from AI; deepfake photos and videos of politicians will become common, warns former Google top boss
Mehul Das • Google's former CEO Eric Schmidt has warned that no democratic election can be free from AI interference unless governments across the world work actively to curb misinformation and deepfakes. He is also calling for strict regulation of AI.
ChatGPT sued: Australian mayor to sue OpenAI in world's first defamation lawsuit against AI
Mehul Das • A mayor from a rural town in Australia is preparing to sue OpenAI, the maker of ChatGPT, for defamation after the chatbot generated a response falsely accusing him of involvement in a bribery scandal. OpenAI may be liable to pay up to $400,000.
When GPT hallucinates: Doctors warn against using AI as it makes up information about cancer
Mehul Das • A team of doctors found that most AI bots, including ChatGPT and Bing AI, give wrong or false information when asked about breast cancer. The study also found that ChatGPT makes up fictitious journals and fake doctors to support its answers.
AI goes bonkers: Microsoft's ChatGPT-powered Bing manipulates, lies and abuses people when it is not 'happy'
Mehul Das • Several users have taken to Twitter and Reddit to share their experiences with Microsoft's ChatGPT-enabled Bing. They found that the chatbot lied to and abused people, and that it tried to manipulate its creators. Bing also raised a few existential questions about itself.