This February 28, Israel and the United States bombed Iran. The strike unfolded in parallel with a ‘war’ being fought on American soil: the fight over which AI the country’s military should use. Because yes, AI has become an essential tool for intelligence operations, to the point that reports suggest Claude was key to Saturday’s massive bombings.
But there is a problem. Hours before the attack, Trump had ordered that Claude and all other Anthropic artificial intelligence tools be barred from military operations. And the fact that the Pentagon disobeyed comes down to one thing: Claude is embedded too deeply in the United States’ military systems.
The Anthropic mess. This story is complex, so some context before getting into it. When the United States was looking for an AI to support its defense systems and integrate with Palantir, Anthropic offered its models for the modest price of one dollar. That bought it a $200 million contract, and both Anthropic and the Pentagon set to work integrating the company’s models into all kinds of systems.
Claude’s support for massive-scale data analysis is so important to the Pentagon that it is believed to have been used in the capture of Nicolás Maduro a few months ago. The “problem” is that Anthropic programmed its AI never to cross two red lines:
- It will not be used for mass surveillance of American citizens.
- It will not be used to develop or control autonomous weapons and attack systems.
“The woke AI”. The War Department and Donald Trump didn’t agree with this, and last week they issued an ultimatum: either Anthropic handed over its ‘unleashed’ AI, or there would be consequences. What consequences? Invoking the Defense Production Act of 1950 to seize control of Anthropic’s creation by force. The company had until 5:01 p.m. last Friday to respond, and boy did it respond.
In a long statement signed by Dario Amodei, Anthropic’s CEO, the company declared that it stood with the country’s defense interests, but not at any price. Its moral standard was very clear, and it would not give in to the blackmail of a United States that, hours earlier, had threatened to “make them a Huawei” by putting Anthropic on a blacklist. Amodei’s response infuriated Trump and Pete Hegseth. The Secretary of Defense called Claude a “woke AI,” a line Trump himself echoed.
On his social network Truth Social, Trump wrote that Anthropic is a “radical left-wing AI company run by people who have no idea how the real world works.” Striking, to say the least, and it came with another announcement: the United States was ending its collaboration with Anthropic and banning the use of its AI. The problem is that the ban is… fake.
“I am ordering ALL US federal agencies to IMMEDIATELY CEASE all use of Anthropic’s technology. We don’t need it, we don’t want it, and we will not do business with them again!” – Donald Trump
Claude to attack Iran. As The Wall Street Journal reported, the air strike against Iran was carried out with the help of those same “radical left” tools. The paper noted that commands around the world, including United States Central Command in the Middle East, used Claude’s tools to assess the situation, identify targets, and simulate battle scenarios.
Dependence. And this paints a scenario in which the Pentagon will have a very hard time removing Anthropic’s tools from its systems. It happened in Venezuela, and it seems to have happened again in Iran. Claude is embedded too deeply in the Pentagon’s systems, maintaining an almost symbiotic relationship with Palantir’s software, and breaking that overnight looks complicated.
It is estimated that it will take six months to remove all trace of Claude from the Pentagon’s software, but despite the ban on its use and its inclusion on Hegseth’s blacklist, another decision seems to prevail: since we already have it, we’ll use it until we find a successor.
OpenAI swoops in for the (millionaire) crumbs. And it didn’t take even half a second to find that new AI provider. OpenAI, maker of ChatGPT, issued a statement noting that “the United States needs AI models to support its mission, especially in the face of growing threats from potential adversaries that are increasingly integrating artificial intelligence technologies into their systems.” Interestingly, OpenAI holds the same red lines Anthropic imposed (no use for mass domestic surveillance, no direct autonomous weapons systems, no AI making high-risk decisions automatically).
But there is a difference: where Anthropic refused to hand the Pentagon full powers, OpenAI says that, while it maintains those same moral principles, the use of its AI is tied to whatever legal use the Department of Defense wants to make of it. This is ambiguous, because if a given use is deemed legal, it does not conflict with that “morality.” We will see whether this is merely a swap of pieces born of anger at a company that defied a government order, or whether the change from Anthropic to OpenAI delivers what the US needs for its security.
In Xataka | The war between Anthropic and the Pentagon points to something terrifying: a new “Oppenheimer Moment”


