17:01 this Friday, February 27. That was the deadline the United States gave Anthropic to grant the Pentagon full powers over the use of its AI. Anthropic's software is fully integrated with Palantir and the Department of Defense's systems, but the US believes it has a problem: Anthropic has tied its AI to moral rules that, in Washington's view, shouldn't exist. So this week Defense sent the company a message: either it delivers an AI without restrictions or there will be consequences.
And Anthropic has answered with a resounding “no.”
Monumental friction. This is not a conflict between companies: it is a company against its own Government. Anthropic offered its AI to the Pentagon for integration into its systems, and it did so at a symbolic price: one dollar. The Pentagon accepted, and the response was a $200 million contract. The Department of Defense began integrating Anthropic's AI into its systems, with everything that entails: full access to documents that no one outside the Pentagon can see.
The US also wants it to be a battlefield tool, but there is a problem: that AI is 'programmed' so that it cannot be used for mass surveillance of American citizens, for the development of weapons, or for autonomous weapons. That is precisely what the Pentagon wants to do. The Secretary of Defense sent Anthropic a message: either they hand over an AI without limits or they will be treated like Huawei.
These are my principles. We imagine these have been intense days, but Dario Amodei, the company's CEO, has responded on its blog. His opening sentence is powerful: "I deeply believe in the existential importance of using AI to defend the United States and other democracies to defeat our autocratic adversaries." A promising start to what seems like a statement of capitulation, but… no.
After reviewing what they have "given up" to get their AI into the Department of Defense's systems, and criticizing the US for playing the Defense Production Act of 1950 card to intimidate an Anthropic that will go public this year, the resolution is firm: "we cannot in good conscience agree to the request." The company is clear that its AI can be very helpful to the Government, but it continues to oppose, above all, two specific uses:
- Mass surveillance, because AI can assemble complete databases of anyone's life.
- Autonomous weapons, which cannot be trusted with decision-making because they lack the judgment of a professional soldier (or the judgment a professional soldier should have). An AI does not question, has no remorse, does not wonder whether it is right or wrong or whether the target is a false positive. An AI… executes.
Everything we have given you. In the statement, Amodei launches what is almost a plea, an "after everything I have given you," arguing that this commitment to US leadership has run against the company's own interests. They point out that they have given up "several hundred million dollars to prevent Claude from being used in companies linked to the Chinese Communist Party," and that this cost them attacks from China, with some Chinese companies attempting to abuse Claude.
And it is an open letter, a declaration of intent that has been endorsed by competitors' employees: 219 from Google and another 65 from OpenAI. Many have given their names, many others have signed anonymously, but all with the same goal: to reject the Department of Defense's demands to use their models for mass surveillance and to "autonomously kill people without human supervision."
“In any case, these threats do not change our position: we cannot in good conscience agree to your request”
There is no middle ground. However nice and romantic the statement from Google and OpenAI employees sounds, the reality is different. They are challenging the Government, a Government that keeps demonstrating it will execute whatever it wants (see ICE, the climate moves, the withdrawal from the WHO, tariffs, or threats to partners and allies). And the problem is that Anthropic has a lot to lose, much more than that $200 million, which is pocket change in the context of AI investments.
If they give in, it would mean stepping back from an almost founding principle of the company. If they do not give in, they become the "prestige brand" of AI: they demonstrate that they have the model the world's most technologically advanced army needs and that they are untouchable, at least until a United States already exploring alternatives such as Google, X, and OpenAI actually finds one. But they run the risk, as we said, of being blacklisted by the US.
The Government has threatened to brand Anthropic a company that poses a "supply chain risk." As Amodei points out in the statement, this is a label reserved for the country's adversaries that has never been applied to an American company. It would put Anthropic in the same category as Huawei and other Chinese firms and would prevent American partners from doing business with them. And even if Anthropic does not give in, the US could take over its AI by force under the Defense Production Act: through that law, if the Government deems the tool necessary and essential to national security, what Amodei says no longer matters.
In the Pentagon's court. That is where the ball is right now, and time is running out. As we said, this is not a deal between companies or a fight between politicians: it is a company being threatened by its own country. A threat that amounts to "give it to us willingly or we will take it by force."
In any case, we will have to see how the day unfolds. Anthropic notes that if the Department of Defense ultimately chooses another company, they will work toward a seamless transition. The question is whether such a peaceful outcome is possible given that Anthropic's AI is already deeply integrated into the Pentagon's systems… and it does not sound feasible to simply undo what has already been done.
Images | The White House, Fortune Brainstorm Tech
In Xataka | An 86-year-old farmer was offered $15 million to build a data center. He said no