If you are one of those people who trust ChatGPT completely and never bother to cross-check what the AI language model comes up with, remember that AI language models, be it ChatGPT or Bard, are not always reliable. AI models often generate imaginative content and mistakenly present it as factual information, a phenomenon that AI developers call hallucination. A college professor learnt this the hard way.

Professor falls for hallucinating AI, fails his entire class

According to a Reddit thread, an unnamed professor at the University of Texas recently made an appearance on the local news because of an AI tool. The professor had tried to use ChatGPT to check whether his students had plagiarised an essay he had assigned, or whether they had honestly handed in their own original work. ChatGPT incorrectly identified the essays submitted by his students as being generated by a computer programme.

While ChatGPT, the sophisticated chatbot developed by OpenAI, can produce text, translate languages, generate creative content, and provide informative responses to queries, it has limitations, which can be quite severe in certain cases. After an audit and investigation, it was found that ChatGPT's assessment was false: the essays had indeed been written by the students themselves and were not generated by a computer.

AI hallucinations leading to embarrassment

Recognising his mistake, the professor apologised to his students and agreed to offer them a second opportunity to take the exam.

This incident serves as a cautionary tale about the risks of relying solely on AI tools for plagiarism detection. AI tools, while promising, are not infallible and can make errors.
It is crucial to exercise caution and understand the limitations of these tools. It is also worth noting that ChatGPT is still a work in progress, and future improvements may enhance its accuracy in detecting plagiarism.

Nothing beats human judgement, at least for now

Nonetheless, it is essential to recognise that AI tools should not be viewed as a complete substitute for human judgement. Humans possess the expertise and intuition necessary to identify instances of plagiarism effectively.

If plagiarism is a concern, it is always advisable to have human reviewers assess the work. While AI tools can serve as a supplementary measure, relying solely on them is not recommended. This incident serves as a reminder that AI is a powerful tool, but one that must be used responsibly and with awareness of its limitations.