The Pentagon labeled Anthropic a national security risk. So Anthropic is suing the Pentagon

The soap opera between Anthropic and the Pentagon has a new chapter (and things keep escalating). After the push and pull of the last few weeks, Anthropic stood its ground, and the US responded by putting the company on its blacklist. Anthropic was not amused.

What has happened. Anthropic has sued the US Department of Defense (or War), calling the decision to blacklist it “unprecedented and illegal” and arguing that it will cause irreparable harm to the company. In statements to Fortune, an Anthropic spokesperson assured that the company remains committed to protecting national security and wants to find a solution, but that the lawsuit “is a necessary step to protect our business, our customers and our partners.” The administration has not commented on the suit.

A lot of money at stake. By blacklisting Anthropic, the government prevents defense contractors and suppliers from using Claude in their Pentagon-related activities. Additionally, Trump ordered the entire government to stop using Anthropic’s AI. The company says government contracts are already being canceled and other private contracts are in jeopardy. Anthropic’s commercial director, Paul Smith, has confirmed that one client has already swapped Claude for another generative AI; that contract alone will cost the company at least 100 million dollars.

Doubts about legality. Anthropic says the government’s move is not legal. Is it right? According to legal experts at Lawfare, the “supply chain risk” label will not withstand judicial scrutiny. The main reason is that this designation is intended for foreign adversaries, as happened with Huawei. The law’s definition is “the risk that an adversary could sabotage or subvert a covered system”; it says nothing about using the label to punish a domestic company over a disagreement. According to Lawfare, the statements by Trump and the defense secretary “frame the action as ideological punishment of a political enemy.”

The disagreement. The origin of this escalation lies in the red lines Anthropic drew. Basically, the company refused to allow its model to be used for mass surveillance of citizens and, especially, for the development of lethal weapons without human supervision. The concern is justified: a soldier can refuse to carry out an illegal order; an AI cannot.

The Pentagon does not like red lines (other people’s, of course) and demanded to be able to use the technology without limits. In Trump’s words in a Truth Social post: “We will decide the fate of our country, NOT an out-of-control radical left-wing AI company run by people who have no idea what the real world is like.”

Meanwhile OpenAI… Shortly after Anthropic was blacklisted, the government found a new candidate to carry out its plans: OpenAI. According to Sam Altman’s company, its model has more safeguards and, hey, calm down, it’s not that big of a deal. What has followed is an image crisis for ChatGPT, with resignations and mass uninstalls by users who have switched to Claude. But let’s not fool ourselves: although Anthropic has won the battle of public opinion, if the US government keeps this up, the future looks pretty bleak for Amodei’s side.

In Xataka | Anthropic has become the Apple of our era and OpenAI our Microsoft: a story of love and hate

Image | Anthropic (edited)
