A new study has found that ChatGPT and GPT-4 display strong emotional intelligence and could become useful tools for teaching children. Bill Gates has even gone so far as to say that ChatGPT and GPT-4 based applications will be used by teachers to teach students within about 18 months. However, there are some glaring issues as well. The study found that not only do ChatGPT and GPT-4 show strong emotional intelligence, they also tend to prefer emotions over facts in a number of cases.

**Stumbling upon ChatGPT's EQ**

Arvind Narayanan, a computer science professor at Princeton, set up a speech interface to ChatGPT for his nearly four-year-old daughter. It was partly an experiment and partly a reflection of his belief that artificial intelligence (AI) agents will one day play a significant role in her life. His daughter was naturally curious, frequently asking about animals, plants and the human body, and he believed ChatGPT might provide useful answers.

To his amazement, the OpenAI chatbot performed admirably in terms of empathy once he informed the system that it was talking to a small child.

Microsoft Corp and Alphabet Inc's Google are racing to improve their search engines using the large language model technology that powers ChatGPT, but there is reason to believe the technology works better as an emotional companion than as a source of information.

That may sound strange, but what is even stranger is that Google's Bard and Microsoft's Bing, both built on the same kind of large language model technology, are being positioned as search tools despite a history of embarrassing factual errors.

**Bard AI and ChatGPT's hallucinations**

In its very first demonstration, Bard gave incorrect information about the James Webb Space Telescope, while Bing made a series of financial errors in its own introduction. When a chatbot is used for search, the cost of factual errors is high. According to Eugenia Kuyda, creator of the AI companion app Replika, which has been downloaded more than 5 million times, the cost is far lower when the chatbot is designed as a companion.

"It won't ruin the experience, unlike with search, where small mistakes can break trust in the product," Kuyda explained.

Margaret Mitchell, a former Google AI researcher who co-authored a paper on the dangers of large language models, has said that they are "not fit for purpose" as search engines. Language models make mistakes because the data they are trained on often contains inaccuracies, and the models have no ground truth against which to check what they say. Their designers may also prioritise fluency over accuracy.

That is one of the reasons these tools are so good at simulating empathy. After all, they learn from text scraped from the internet, including emotionally expressive posts on social networking platforms like Twitter and Facebook and the personal advice offered on forums like Reddit and Quora.

**ChatGPT's EQ makes it a great therapist. Or does it?**

Conversations from film and television scripts, dialogue from novels and research papers on emotional intelligence are all thrown into the training pot to make these tools appear compassionate. According to a piece published this month in Bloomberg Businessweek, some people are already using ChatGPT as a kind of robot therapist.
One user said they used it so as not to be a burden on others, including their human therapist.

Clinical psychologist Thomas Ward of King's College London, who has studied the role of software in therapy, warns against assuming that AI can properly fill the gap for people who need mental health support, especially if their problems are serious. A chatbot, for example, is unlikely to acknowledge that a person's emotions are too complicated for it to comprehend. ChatGPT, in other words, seldom says "I don't know", because it was built to answer with confidence rather than caution.

People should also be wary of turning to chatbots as a regular outlet for their emotions. "Subtle aspects of human connection, like the touch of a hand or knowing when to speak and when to listen, could be lost in a world that sees AI chatbots as a solution for human loneliness," Ward adds. That could end up creating more problems than the ones we think we are solving.

For the time being, these chatbots are more dependable for their emotional talents than for their grasp of facts.
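For readers curious what "informing the system that it was talking to a small child" can look like in practice, here is a minimal sketch that passes such an instruction to the OpenAI chat API as a system message. It is an illustration only, not Narayanan's actual setup: his involved a speech interface, and the model name and prompt wording below are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

messages = [
    # The system message sets the audience and tone before any question is asked.
    {
        "role": "system",
        "content": (
            "You are talking to a curious four-year-old child. Answer gently, "
            "in short and simple sentences, and say 'I don't know' when you are not sure."
        ),
    },
    {"role": "user", "content": "Why do leaves turn yellow?"},
]

# "gpt-4" here is an assumption; any chat-completion model is called the same way.
response = client.chat.completions.create(model="gpt-4", messages=messages)
print(response.choices[0].message.content)
```

In a full speech setup, a speech-to-text layer would turn the child's spoken question into the user message, and a text-to-speech layer would read the reply aloud.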