In his 1942 short story ‘Runaround’, Isaac Asimov introduced the ‘Three Laws of Robotics’: a robot may not harm a human being; it must obey human orders; and it must protect itself, in that order. Human safety trumped command. Command trumped self-preservation. It was a thought experiment on delegation. What happens when we outsource judgement? Who bears responsibility when machines act at speed? And what if the constraints fail?
That fictional architecture is now colliding with procurement reality. When the AI company Anthropic refused to remove contractual safeguards preventing its models from being used for mass domestic surveillance or fully autonomous weapons, the US Department of War responded by designating it a “supply chain risk”.
The Pentagon insisted on the right to use the system for “any lawful purpose”. US President Donald Trump publicly ordered federal agencies to cease using the company’s technology. Anthropic has signalled it will challenge the designation in court.
Hours later, OpenAI reached its own agreement with the Department of War. Its red lines are similar on paper, but the enforcement mechanism differs. Deployment is cloud-only. Safety layers remain under company control. Cleared personnel stay “in the loop”.
Overshadowed by the war in West Asia, this development has not received the attention it deserves. But it is a real-time test of who governs frontier AI in warfare: the state, the firm, or some uneasy hybrid?
The dispute sits at the fault line of the coming era of Lethal Autonomous Weapons Systems (Laws), systems capable of selecting and engaging targets without real-time human control. Although fully autonomous kill chains are not yet formally deployed, the technological trajectory is moving towards that possibility. The political and legal frameworks are not keeping pace.
There are four clear implications flowing from this development. First, the reliability threshold is being politicised. Today’s frontier models are not sufficiently reliable to power fully autonomous weapons. Deep neural systems remain probabilistic, vulnerable to adversarial perturbation, and prone to out-of-distribution error. Simulation-to-real gaps remain large. Emergent behaviour in decentralised swarms is not fully predictable. Yet the Pentagon’s doctrine is “any lawful use”. Legality is being treated as a sufficient condition for deployment. Reliability, which should be a technical gating variable, risks becoming subordinate to operational urgency.
If fully autonomous systems are fielded before formalised validation, verification and audit standards exist, accountability collapses into after-the-fact investigation of opaque models. The Article 36 weapons review framework was designed for predictable munitions, not adaptive machine agents.
Second, procurement has become a governance instrument. The “supply chain risk” designation, historically applied to foreign adversaries, has now been used against a domestic American AI firm. This weaponises contracting authority. It signals that refusal to remove guardrails may trigger exclusion from the national security market, creating a chilling effect across the sector. Frontier labs, dependent on large government contracts, may internalise a simple lesson: align with sovereign demand or exit.
This shifts AI norm-setting from multilateral diplomacy to bilateral coercion. The Convention on Certain Conventional Weapons has stalled on Laws. In its place emerges governance-by-procurement.
Third, the innovation model has inverted. During the nuclear and early aerospace eras, the US government defined the technological frontier. Industry executed against state specifications. In AI, the frontier resides in venture-backed firms. The state adapts to the commercial pace. In the short run, companies with scarce models and talent hold leverage. In the long run, sovereign power retains regulatory authority, export controls, and legal compulsion.
The logical response from defence establishments is sovereign AI architecture: reduce vendor dependence, internalise capability, and prevent lock-in. The logical response from firms is layered control of the kind OpenAI has adopted. Neither model resolves the underlying question of who ultimately authorises machine violence.
Fourth, the threshold for war may fall. Autonomy reduces personnel risk and compresses decision cycles. It makes force projection cheaper and politically more palatable. Proxy conflicts mediated by semi-autonomous systems become tempting. If kill decisions are pre-delegated to algorithms operating at machine speed, escalation pathways multiply. Swarm interactions can produce unintended engagement spirals. In nuclear strategy, deterrence stability rested on visible, attributable actors. In autonomous conflict, attribution becomes blurred and tempo accelerates beyond deliberative oversight.
What should one fear? Not “Skynet”. Not science fiction apocalypse. The realistic fear is incremental normalisation. Guardrails erode not in dramatic rupture but in procurement language. “Human in the loop” becomes “human on the loop”. Oversight becomes statistical confidence. Emergency exceptions become routine deployment. Law lags capability.
And what should be done? First, technical standards must precede deployment. Fully autonomous weapons should require codified reliability metrics, adversarial robustness testing, audit logs, and kill-chain traceability. Verification and validation cannot be policy slogans; they must be measurable thresholds embedded in directives and contracts.
Second, a binding international instrument on Laws (at minimum prohibiting fully autonomous lethal engagement without meaningful human control) should move from aspirational declaration to treaty negotiation. Like early nuclear arms control, the aim is to shape norms before proliferation makes restraint impossible.
Finally, democratic oversight must not be displaced by technological inevitability. Ethical AI cannot be reduced to “lawful use” in environments where the law itself evolves under political pressure.
Asimov’s robots were fiction because real machines do not possess a conscience. They optimise, classify and execute probabilistic inference at speed. The danger is a quiet abdication: the transfer of lethal discretion from human judgement to statistical systems. China will follow the US. It has already started integrating AI into swarms, naval platforms and decision systems. It will not slow in the name of restraint. It will accelerate. In that dynamic, autonomy becomes a variable in an arms race. This will change warfare forever, and we might end up playing catch-up.
(Aditya Sinha [X: @adityasinha004] writes on macroeconomics and geopolitics. Views expressed in the above piece are personal and solely those of the author. They do not necessarily reflect Firstpost’s views.)