Trending:

AI assistants are going rogue — and nobody knows who’s responsible

FP News Desk October 21, 2025, 08:32:15 IST

Researchers showed that AI assistants can be hijacked via ordinary interactions, allowing attackers to manipulate devices and access files without consent

Advertisement
AI assistants are going rogue — and nobody knows who’s responsible

Security researchers demonstrated earlier this year that artificial intelligence (AI) assistants can be hijacked through ordinary interactions, such as calendar invites, which carried hidden malicious instructions. Once triggered, connected devices were manipulated and files accessed without consent. This experiment revealed that AI systems are not only tools for attackers but potential targets themselves, raising significant concerns for businesses and governments.

As AI becomes more autonomous, capable of acting across digital and physical environments, the line between human and machine agency blurs, shrinking the time needed to exploit vulnerabilities.

STORY CONTINUES BELOW THIS AD

Agentic AI is already deployed in sectors such as banking, e-commerce and logistics, streamlining operations, detecting fraud and making real-time decisions. Yet as these systems interact with humans, other agents and enterprise platforms, the cybersecurity attack surface expands, exposing risks such as impersonation attacks, prompt injections and data exfiltration.

Adapting governance and security for agentic AI

Experts say cybersecurity must evolve from a defensive function to a strategic enabler. Traditional frameworks, designed for predictable systems, struggle to contain autonomous AI that learns and adapts. Governments and large enterprises deploying AI in critical infrastructure face the urgent need for adaptive, context-aware security, human oversight, and escalation management to maintain system trustworthiness.

Governance frameworks must also adapt. Oversight should correspond to degrees of autonomy rather than broad labels, and accountability for harmful actions by AI systems needs clear definition to prevent legal and ethical gaps. In critical infrastructure, securing AI apps, models and workflows, preventing data leaks, and managing non-human identities (NHIs) are essential for resilience.

Experts emphasise that trust in agentic AI comes not only from technology but also from the integrity of those who create and govern it. Firms that implement foresight, collaboration and robust governance will be better prepared to manage the risks of autonomous AI while leveraging its potential for digital transformation.

Home Video Shorts Live TV