In a rare confrontation with Silicon Valley, the US Department of Defense (DoD) has clashed with artificial intelligence (AI) company Anthropic over the use of its technology in weapons and in the surveillance of citizens, according to Reuters.
At issue are safeguards built into Anthropic's technology that would prevent the government from deploying it to target weapons autonomously or to conduct domestic surveillance in the United States, Reuters reported, citing three people familiar with the matter.
Six people familiar with the matter said the DoD and Anthropic are at a standstill despite extensive talks under a contract worth up to $200 million.
This is the first major confrontation between the Trump administration and a technology company in Trump's second term.
In this term, technology companies, including Big Tech firms like Amazon and Meta, have toed Trump's line and not put up any meaningful resistance. Some of the largest tech companies, like Oracle and Palantir, have been deeply tied to the administration, along with their bosses, such as the Ellisons of Oracle and Peter Thiel of Palantir, blurring the line between private business, governance, and profiteering.
Anthropic is among the major AI companies that received DoD contracts last year, alongside Alphabet’s Google, Elon Musk’s xAI, and ChatGPT-maker OpenAI.
Anthropic concerned about AI’s weaponisation as others dilute positions
Among peers that have diluted their policies to accommodate the Trump administration’s weaponisation of AI on the battlefield as well as inside the country, Anthropic has stood out with its resistance to the use of its technology in weapons and surveillance.
In talks with the Trump administration, Anthropic has raised concerns that its tools could be used to spy on Americans or assist weapons targeting without sufficient human oversight, sources told Reuters.
But the DoD has argued that it should be able to use AI regardless of the company’s policies as long as the usage complied with US laws.
While the DoD could use Anthropic’s AI as it wishes, since no one would be able to stop the administration from doing so, such a move could run into problems.
Firstly, Anthropic’s models, like Claude, are trained to avoid taking steps that might lead to harm. Secondly, the DoD would need to rope in Anthropic’s staffers to retool the models and keep them updated over time.
While Anthropic has so far resisted pressure from the Trump administration, other AI giants have watered down their previous commitments against the weaponisation of AI. Musk’s xAI barely appears to have any restrictions, Google last year removed its pledge not to use AI for weapons and surveillance, and OpenAI and Meta similarly removed restrictions on military usage in 2024.