As artificial intelligence becomes an increasingly common space for people to express thoughts they may not share elsewhere, new efforts are emerging to ensure those conversations do not spiral into harm.
A tool under development in New Zealand aims to do just that by identifying ChatGPT users who display signs of violent extremism and directing them towards support rather than simply cutting them off.
The initiative, reported by Reuters, reflects growing unease among policymakers and technology companies over the role AI platforms may play in sensitive or dangerous conversations. Instead of relying solely on moderation or bans, the new approach seeks to intervene more constructively, connecting users with human counsellors and specialised chatbot systems designed to de-escalate harmful thinking.
New tool that flags risky behaviour
At the centre of this effort is ThroughLine, a New Zealand-based startup that already works with major AI firms, including OpenAI, Anthropic and Google. The company currently operates a global network of crisis-support services, redirecting users flagged for issues such as self-harm, domestic violence or eating disorders.
Now, ThroughLine is exploring ways to expand its scope to include violent extremism. Founder Elliot Taylor said the company is in talks with the Christchurch Call, a global initiative launched after the 2019 Christchurch mosque attacks, to develop a system that combines chatbot-based intervention with referrals to real-world services.
The proposed solution would use a hybrid model. A specialised chatbot would engage users showing early signs of extremist thinking, while also connecting them to appropriate human-led support networks. Taylor emphasised that the system would rely on expert input rather than generic large language model training.
“We’re not using the training data of a base LLM,” he said, according to Reuters. “We’re working with the correct experts.”
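In software terms, the hybrid model described above resembles a triage pipeline: an expert-informed classifier flags early signs, a specialised chatbot takes over the conversation, and a referral to a human-led service is queued in parallel. The sketch below is purely illustrative; every name in it (assess_message, de_escalation_reply, refer_to_human_support) is hypothetical, and nothing here reflects ThroughLine's actual implementation.

```python
# Purely illustrative triage flow; all names and logic are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RiskLevel(Enum):
    NONE = 0
    EARLY_SIGNS = 1  # early indicators that warrant a supportive intervention


@dataclass
class RiskSignal:
    level: RiskLevel
    category: str  # e.g. "self-harm", "violent-extremism"


def assess_message(text: str) -> RiskSignal:
    """Stand-in classifier. A real system would rely on an expert-informed
    model, not keyword matching."""
    if "example extremist phrase" in text.lower():
        return RiskSignal(RiskLevel.EARLY_SIGNS, "violent-extremism")
    return RiskSignal(RiskLevel.NONE, "none")


def de_escalation_reply(category: str) -> str:
    # Responses designed with domain experts rather than drawn from
    # base-LLM training data, per Taylor's description.
    return f"Supportive, expert-designed response for {category} (stub)."


def refer_to_human_support(category: str) -> None:
    print(f"[referral queued] human-led service for: {category}")


def handle_message(text: str) -> str:
    signal = assess_message(text)
    if signal.level is RiskLevel.NONE:
        return "Standard assistant reply (stub)."
    # Hybrid step: engage via the specialised chatbot AND queue a referral
    # to a human-led support network for the same issue.
    refer_to_human_support(signal.category)
    return de_escalation_reply(signal.category)


print(handle_message("hello"))
```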
ThroughLine already operates a network of around 1,600 helplines across 180 countries. When an AI system detects signs of distress, users are routed to services in their region. However, Taylor noted that the range of issues people discuss with chatbots has expanded significantly, now including flirtations with extremist ideologies.
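As a rough illustration of that routing step, the sketch below maps a user's region and flagged issue to a local service, with fallbacks when no issue-specific entry exists. The directory contents and the route_to_helpline interface are invented for this example; ThroughLine's real directory of roughly 1,600 helplines would carry far richer metadata.

```python
# Hypothetical regional routing table; all entries are invented placeholders.
HELPLINE_DIRECTORY = {
    # (country_code, issue) -> service name; real entries would include
    # phone numbers, hours, languages and capability metadata.
    ("NZ", "self-harm"): "NZ crisis helpline (example entry)",
    ("NZ", "violent-extremism"): "NZ disengagement service (example entry)",
    ("US", "self-harm"): "US crisis helpline (example entry)",
}


def route_to_helpline(country_code: str, issue: str) -> str:
    """Pick the most specific local service for a flagged issue, falling
    back to a generic local crisis line, then an international one."""
    return (
        HELPLINE_DIRECTORY.get((country_code, issue))
        or HELPLINE_DIRECTORY.get((country_code, "self-harm"))
        or "International fallback line (example entry)"
    )


print(route_to_helpline("NZ", "violent-extremism"))
```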
A much-needed step
The push for such tools comes amid mounting scrutiny of AI platforms and their potential links to real-world harm. According to Reuters, recent incidents, including a deadly school shooting, have intensified calls for stricter safeguards. In one case, OpenAI faced pressure from Canadian authorities after it emerged that a perpetrator had been banned from its platform without officials being notified.
Experts argue that simply blocking users may not be enough. In some cases, it could even drive individuals towards less regulated platforms, such as Telegram, where harmful ideas may spread unchecked.
As reliance on AI tools like ChatGPT continues to grow, so does the responsibility to handle sensitive conversations carefully. Many users now turn to chatbots for guidance, advice and emotional support, often sharing deeply personal or troubling thoughts.
Researchers say this makes early intervention crucial. A chatbot-based redirection system could help address not just harmful content, but also the underlying behavioural patterns.
Still, questions remain around implementation. Decisions about follow-up actions, including whether authorities should be alerted in high-risk cases, are yet to be finalised. Developers are also cautious about avoiding responses that could escalate a situation.
For Taylor, the goal is to ensure that users who disclose distress are not left without help.