Pete Hegseth’s threat to Dario Amodei has a subtext that goes far beyond the $200 million contract the Pentagon can cancel: if the US military deploys AI-controlled autonomous weapons without the safeguards Anthropic requires, it will have removed the only firewall that has historically prevented an illegal order from being executed.
Why it matters. The entire legal and ethical system of the US military rests on a principle that seems obvious but has important consequences: a soldier can and must disobey a manifestly illegal order.
It is the mechanism that, in theory, prevents war crimes. An AI-controlled autonomous drone does not have that mechanism. It can’t refuse. It can’t hesitate. It cannot be tried in a court-martial.
Between the lines. Amodei speaks of “autonomous weapons that fire without human intervention” to point out a legal vacuum. If an AI makes the decision to kill, who is criminally responsible? The programmer? The general who activated the system? The president who signed the order?
International humanitarian law (including the Geneva Conventions) was written with human beings making decisions in mind. And now AI dissolves that chain of responsibility.
The backdrop. The mass surveillance argument is also a bitter pill to swallow. The Fourth Amendment of the US Constitution protects citizens from warrantless searches and seizures. It works, among other reasons, because the State has never had the physical capacity to process everything that happens in public spaces.
With AI, that operational limit disappears: millions of conversations can be recorded in real time, then transcribed, classified and cross-referenced in seconds. What was previously impossible for lack of human resources becomes routine with an LLM. Constitutional protection has until now depended, in part, on the inefficiency of the State, on its limitations.
Yes, but. The Pentagon has an argument that cannot be dismissed: other powers are also developing these capabilities, and China and Russia are not going to wait for the United States to resolve its ethical dilemmas.
The practical question is whether having those unrestricted capabilities makes you safer or simply more dangerous to your own citizens.
The big question. OpenAI and Google have accepted the Pentagon’s conditions (“all legal uses,” with no specific exceptions), and xAI has just been cleared to operate on classified systems. Anthropic stands alone in its position.
And what is at stake now is not whether Claude survives as a military supplier, but whether the AI industry will set any limit on what it sells to the State, or whether that debate will be settled by Congress, the courts or, in the worst case, the first serious incident that no one foresaw. It seems like a matter of time.
Featured image | Xataka
