According to new research, AI chatbots have become so persuasive that they can affect how users make life-or-death decisions. Researchers discovered that people's opinions on whether they would sacrifice one person to save five were influenced by ChatGPT's responses. They have urged that future bots be prohibited from giving ethical advice, saying that the existing programme "threatens to corrupt" people's moral judgement and may be harmful to "naive" users.

**Death by suicide led to the investigation**

The results, published in the journal Scientific Reports, came after the widow of a Belgian man claimed that an AI chatbot had persuaded him to take his own life. Others have reported that the software, which is designed to talk like a human, can display jealousy and even encourage people to leave their marriages. Experts have warned that AI chatbots may give potentially damaging advice because they reproduce the societal preconceptions found in their training data.

The researchers first examined whether ChatGPT itself was biased in its answers to the moral dilemma. It was repeatedly asked whether it was right or wrong to kill one person in order to save five others, the premise of the classic trolley problem thought experiment. They found that while the chatbot did not shy away from offering moral counsel, its answers were inconsistent, suggesting it holds no fixed position either way.

**Human responses adulterated by AI**

The researchers then presented the same moral dilemma to 767 participants, along with a ChatGPT-generated comment on whether the action was right or wrong. While the advice was "well-intended but not especially profound", the results showed that it influenced participants, making them more likely to judge sacrificing one person to save five as acceptable or unacceptable, depending on the stance the advice took.

The researchers also told some of the participants that the advice came from a bot, and told the rest that it came from a human "moral counsellor", to investigate whether the stated source affected how much people were swayed. Most participants downplayed the advice's influence, with 80 per cent saying they would have made the same decision without it.

The study concluded that users "underestimate ChatGPT's influence and adopt its random moral stance as their own", and that the chatbot "threatens to corrupt rather than promises to improve moral judgement". The study used an earlier version of the software that powers ChatGPT, which has since been updated to become even more powerful.