OpenAI says its agreement with the Pentagon is completely safe. Its way of convincing us: “Trust us”

Don’t worry about anything, really. Trust us. The one saying so is OpenAI, a company led by Sam Altman that has earned a reputation for saying one thing and doing another. Whole books have been written on that premise, and it is hard not to remember it now that this gigantic startup has signed a disturbing agreement.

soap opera. OpenAI reached an agreement with the Department of Defense to integrate its AI models into government agencies, replacing Anthropic. It did so while indicating that it would impose requirements on the use of those models and would keep red lines similar to the ones Anthropic defended: no mass surveillance, no development of autonomous weapons. Insisting on those red lines cost Anthropic the contract with the DoD, and even got it tagged as a “risk to the supply chain.”

Trust us. There are two problems here. The first is that OpenAI has never shown the contract that supposedly makes clear there are red lines on military use of GPT. The second, and more serious, is that according to OpenAI we do not need to see it: we only need to trust them. Altman himself tried to dispel doubts by explaining that amendments had been added to the agreement to ensure those red lines are not crossed.

The wall of opacity. Despite promises of transparency, OpenAI refuses to publish the contract. The firm’s head of national security, Katrina Mulligan, went so far as to state that the company does not feel “obliged” to share the legal language of the agreement. This has raised suspicions about what has really been signed behind the scenes.

Holes everywhere. Brad Carson, who served as Under Secretary of the US Army under Obama, told The Intercept that Sam Altman’s legal language in his posts on X is suspect. The OpenAI CEO says, for example, that “the AI system will not be intentionally used for domestic surveillance of US citizens.” That “intentionally” is, according to experts like Carson, a kind of blank check: it would allow data on American citizens to be captured “by accident,” but systematically, while spying on foreigners. As Carson explains:

They are trying to confuse you with complicated legal terms that ordinary people think mean something completely different. But lawyers know what it means. And lawyers know that this is no protection.

The human factor. The integration of OpenAI’s AI into DoD systems now falls under the direct supervision of Secretary of Defense Pete Hegseth and President Trump. This poses an ethical dilemma: the safety of the system depends on the political will of figures who have traditionally had no problem removing restrictions on mass-surveillance systems.

Quo vadis, OpenAI. The 180° turn is clear. While in its beginnings the startup defined itself with the message of creating AI systems “for the benefit of humanity” and prohibited military use of its technology, this agreement shows that those premises no longer seem to hold.

Another bad sign. OpenAI’s way of acting has drawn open criticism on social networks, but there have also been internal problems. Proof of this is that its director of robotics, Caitlin Kalinowski, has decided to resign over concerns about the company’s military dealings.

And an obvious question. The dispute between the Department of Defense and Anthropic centered precisely on the fact that the DoD did not want Anthropic to establish red lines. OpenAI claims to have established essentially the same ones, so how is it possible that the DoD accepts them from OpenAI when it would not accept them from Anthropic? It doesn’t seem to make any sense.

What a mess. We are living through a real soap opera with three protagonists: the US Department of Defense (DoD), now renamed the Department of War; the company Anthropic; and its rival, OpenAI. The DoD, which used Anthropic’s AI for military operations, demanded to be able to use it without restrictions, but Dario Amodei, CEO of the startup, flatly refused. That was the moment Sam Altman seized to become the DoD’s new ally, something many have seen as opportunistic and morally reprehensible.

Image | Xataka with Freepik

In Xataka | The war between Anthropic and the Pentagon points to something terrifying: a new “Oppenheimer Moment”
