Mark Walters, a radio host from Georgia, US, has taken legal action against OpenAI following an incident involving the ChatGPT service. According to reports from Gizmodo, Walters has filed a libel lawsuit, claiming that ChatGPT falsely accused him of embezzling funds from The Second Amendment Foundation (SAF), a nonprofit organization advocating for gun rights.

Although it may be challenging for Walters' legal team to demonstrate in court that an AI chatbot like ChatGPT damaged his reputation, the lawsuit has the potential to shape the ongoing discourse around such tools as they increasingly generate bold and unfounded claims.

**Wrongly accused by AI**

In the lawsuit, Walters' lawyer argues that OpenAI's chatbot, during an interaction with Fred Riehl, the editor-in-chief of a gun website, disseminated defamatory content about Walters. Riehl had requested a summary of a case involving Washington attorney general Bob Ferguson and the SAF. However, the AI-generated response wrongly implicated Walters in the case, falsely identifying him as the treasurer and chief financial officer of the SAF, even though Walters has no affiliation with the organization. Furthermore, the case Riehl was researching did not mention Walters at all.

**Also read: Framed by AI: ChatGPT makes up a sexual harassment scandal, names real professor as accused**

The AI chatbot went further, fabricating entire passages in its response that had nothing to do with the alleged financial misconduct, and even got details wrong, such as the case number. Notably, Riehl did not publish the false information; instead, he reached out to the attorneys involved in the actual lawsuit to clarify the matter.

**Why do AI bots lie?**

The incident highlights a significant problem with ChatGPT and similar AI chatbots: they have a well-documented history of generating completely false statements, a flaw that undermines their reliability and usefulness.

Despite these known flaws, companies such as OpenAI and Google have promoted AI chatbots as a novel means of retrieving information, while paradoxically cautioning users against fully trusting the output these systems generate. The contradiction arises because chatbots are built on statistical language models that predict plausible-sounding text rather than verified facts, which can lead to inaccuracies and outright fabrications. While they can be useful for generating insights or preliminary information, it is crucial to approach their output with skepticism and verify it against reliable sources.

**Not the only instance where people have been falsely accused**

John Monroe, Walters' attorney, argues that these companies should be held accountable for the flawed outputs of their AI chatbots. Monroe asserts that while research and development in AI is valuable, it is irresponsible to release to the public a system that is known to generate false information capable of causing harm.

The question, then, is whether fabricated information generated by AI chatbots like ChatGPT could be considered libel in a court of law.

**Also read: ChatGPT sued: Australian Mayor to sue OpenAI in world's first defamation lawsuit against AI**

In April this year, ChatGPT made up a fictitious sexual harassment case and named a real law professor as the perpetrator.
Furthermore, ChatGPT cited a fake article of its own creation, claiming it came from a reputed newspaper, as evidence of the alleged assault.

Eugene Volokh, a professor at the University of California, Los Angeles (UCLA) School of Law who is studying the legal liability of AI models, believes it is a possibility. Volokh points out that while OpenAI acknowledges the potential for mistakes, ChatGPT is not marketed as a joke, as fiction, or as a source of random output. It can therefore be argued that there is an expectation of reliability associated with the system.

In another case from April, this time in Australia, the mayor of a rural town said he would sue OpenAI, the maker of ChatGPT, for defamation after the chatbot generated a response falsely accusing him of involvement in a bribery scandal. OpenAI may be liable to pay up to $400,000.

Volokh's stance suggests that if false information generated by ChatGPT or similar AI models leads to harm and meets the legal criteria for libel, it could attract legal consequences. The key question would be how much accountability companies like OpenAI bear for the actions and outputs of their AI chatbots.