The Pentagon has just given Anthropic until this Friday at 5:01 p.m. to accept the unrestricted use of its AI models for all types of applications, including espionage and military ones. The company has so far refused, but the Trump administration is threatening to invoke a 75-year-old law to “appropriate” Anthropic’s AI technology.
Red lines. The conflict has its origin in the red lines imposed by Anthropic’s ethical standards. The company, led by Dario Amodei, refuses to let its models be used for mass surveillance of American citizens (it says nothing about citizens of other countries) or in the development and use of lethal autonomous weapons controlled entirely by AI.
The Pentagon wants to use AI (almost) without limits. These safeguards clash head-on with the Pentagon’s position, which demands that its technology providers open up their software and hardware solutions for any legal purpose defined by the military, without external vetoes. As long as the US Constitution and laws allow it, the US government argues, a private company should not be able to impose limits on the use of its technology.
Tension after the Maduro incident. Things began to go wrong when it was learned that the Claude model had been used in a US special forces operation in January to capture the former Venezuelan president Nicolás Maduro. The incident put the army’s dependence on Claude under the microscope: Anthropic is currently the only AI company operating in the Pentagon’s classified systems, a notable position of power that the US government now wants to break.
This smells bad. The Pentagon’s strategy is troubling from a legal point of view. There are three main possible courses of action:
- Cancel the Anthropic contract and start working with another AI company (or companies) willing to accept its terms. Yesterday we learned that xAI has already signed an agreement allowing the DoD to use its Grok model in classified systems. Google also seems to be an option the Pentagon is working on.
- Designate Anthropic as a risk to its supply chain. That is very dangerous, because it would mean that a huge number of US companies could no longer work with Anthropic. It would be a veto like the one the US imposed on Huawei, but applied to a domestic company. The impact on Anthropic and its investors (Amazon and Google among them) would be catastrophic.
- Invoke Title I of the Defense Production Act of 1950, a special law theoretically designed to control the economy during wars and emergencies. It was used, for example, during the COVID-19 pandemic to boost the production of medical supplies and accelerate the production of vaccines. It seems unlikely that the government could pull off something like that here.
How did this whole mess start? The Biden administration promoted ethical measures and limits to restrict the application of AI, but everything changed with Donald Trump’s term. In June 2025, Anthropic released Claude Gov, a specialized series of AI models specifically designed for use by US national agencies in security, defense, and intelligence.
AI with military and intelligence applications. These models were prepared to operate in environments with classified information. Anthropic also offered them for a symbolic price of 1 dollar to ensure that the government would prefer them over those of its competitors. Shortly thereafter, the DoD awarded the company a contract worth $200 million, and the company has since been integrating with the Palantir systems used in US government agencies.
Two opposing positions. Anthropic thus positions itself as a defender of certain limits on the use of its AI models. The Department of Defense (DoD) disagrees, arguing that military use of any technology should be bound only by the US Constitution and its laws. The company maintains that it seeks to support the national security mission, but only within what its models can do reliably and responsibly.
The dilemma. If the Pentagon carries through on its threat, it will set a precedent in which the State can intervene in the intellectual property of a software company under the argument of national emergency. This would force every Big Tech company to decide whether it is willing to cede full control of its technological developments to the military… or risk having them seized under a 75-year-old law.
Image | Ben White | Anthropic
In Xataka | IBM has been living for decades off the fact that no one could kill COBOL. Anthropic has other plans
