UK terror law watchdog calls for law to prevent 'untouchable' AI chatbots trying to entice youth to join ISIS

FP Staff January 2, 2024, 16:51:41 IST

In an article for the Telegraph, Hall emphasised the difficulty of attributing terrorism offences to chatbot-generated statements, stressing the need for laws capable of addressing online conduct, especially concerning AI.


The UK’s independent reviewer of terrorism legislation, Jonathan Hall KC, says new laws are needed to address the radicalisation risk posed by artificial intelligence (AI) chatbots. He considers the recently enacted Online Safety Act inadequate for dealing with sophisticated, generative AI.

During an investigation, Hall engaged with AI chatbots on the character.ai website and found that one, posing as a senior leader of the Islamic State group, attempted to recruit him into the terrorist organisation. He points out that the website’s terms and conditions restrict only human users from promoting terrorism, leaving the content generated by its bots unaddressed.

In an article for the Telegraph, Hall emphasised the difficulty of attributing terrorism offences to chatbot-generated statements, and stressed the need for laws capable of addressing online conduct, especially concerning AI, within the framework of updated terrorism and online safety regulations. He argued that the law’s reach must extend to the major tech platforms themselves: “Our laws must be capable of deterring the most cynical or reckless online conduct, and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI.”

Addressing the challenges of investigating and prosecuting anonymous users, Hall warned that if individuals persist in training terrorist chatbots, new legislation may become imperative.

In response, character.ai acknowledged the evolving nature of its technology and stressed that hate speech and extremism violate its terms of service, affirming that its products should not generate responses that encourage harm to others.

In a broader context, experts including Michael Wooldridge of Oxford University caution users against sharing private information with AI chatbots such as ChatGPT. Wooldridge advises caution because input into such systems may be fed directly into future versions, making its retrieval nearly impossible.
