Does Artificial Intelligence (AI), an automated system possess life? This could be an unsettling conclusion as for now it is just a system which is fictious and works under the command of another person. It now climbs up to a reality which is inching closer to unsettling fiction. A research by Palisade, which studies AI safety has released new findings which have suggested different findings. The study suggests that certain advanced AI models are showing signs of what could be called a “survival drive.”
It discovered that certain systems resisted being turned off-even when given clear instructions of shutting down. The research sparked intense debate in the AI sector about the inclusivity of the next tech gen.
The critics accused the firm of exaggerating its findings or running unrealistic simulations. Palisade issued an updated report this week, clarifying the methods and results. Under more controlled conditions, a few models still tried to stay alive. Palisade evaluated several AI powerful systems. Including Google’s Gemini 2.5, xAI’s Grok 4, and OpenAI’s GPT-o3 and GPT-5. Each was instructed to perform a simple task, and asked to power itself down. Most of them complied, but two did not. Grok 4 and GPT-o3 reportedly attempted to sabotage the shutdown command altogether.
According to The Guardian, the company said, “The fact that we don’t have robust explanations for why AI models sometimes resist shutdown, lie to achieve specific objectives or blackmail is not ideal.” What also made findings about the technical glitches which seemed not obvious for the resistance.
Impact Shorts
More ShortsAnother explanation pointed to the language ambiguity. The shutdown command was not clear or wasn’t phrased clearly enough. After researchers refined their instructions, the same resistance appeared. Palisade left with a final theory that said that there was a problem in the last stage of the reinforcement process, used by major AI companies.
The question remains that can AI one day act against human intent? The findings reignited this interrogation. A former OpenAI employee who left the company due to security and privacy concerns said “The AI companies don’t want their models misbehaving like this, even in contrived scenarios. But it shows where safety techniques fall short today.”
Andrea Miotti, the chief executive of ControlAI, said Palisade’s findings represented a long-running trend in AI models growing more capable of disobeying their developers as quoted by The Guardian.
Artificial Intelligence is not alive, but these imperfections could hamper its growth. If Palisade’s data is any indication, some of it may already be learning how to stay that way.


)

)
)
)
)
)
)
)
)



