Anthropic says no to unrestricted military use

The race to dominate artificial intelligence has narrowed to a handful of actors capable of competing at the highest level. Anthropic is part of that small group, alongside names like OpenAI and Google, and its Claude models have gained ground in areas such as programming. At this pivotal moment, however, the company faces a delicate decision: maintain certain limits on the military use of its technology, even at the cost of straining its relationship with the United States Department of Defense.

The standard that changes everything. According to Axios, citing a senior administration official, the Pentagon is pressuring four leading AI laboratories to allow the use of their models for “all lawful purposes,” including in especially sensitive areas such as weapons development, intelligence gathering and battlefield operations. Anthropic, however, has reportedly not accepted those conditions after months of difficult negotiations, which has led the Department of Defense to consider reviewing its relationship with the company.

The lines it doesn’t want to cross. Faced with this broad demand, the company led by Dario Amodei has made it clear that it maintains specific limits, insisting that two areas remain off the table. A spokesperson told Axios that the company remains “committed to using cutting-edge AI in support of US national security,” but clarified that conversations with the Department of Defense have focused on “our strict limits around fully autonomous weapons and mass domestic surveillance,” and that these issues do not “relate to current operations.”

The episode that raised the tension. The Wall Street Journal reported, citing people with knowledge of the matter, that Claude was used in a US military operation in Venezuela aimed at capturing Nicolás Maduro, through Anthropic’s relationship with Palantir. In that same report, the AI company responded that it cannot comment on whether its technology was used in a specific military operation, classified or not, and added that any use, whether in the private sector or in government, must comply with its usage policies.

What is at stake. Beyond that episode, Axios reported that on the US military side “everything is on the table,” including the possibility of reducing or even breaking off the relationship with Anthropic. The same senior official cited by the outlet added that, if this path is chosen, there would have to be “an orderly replacement,” which suggests the process would take some time. The WSJ adds another relevant detail: last year Anthropic signed a $200 million contract with the Department of Defense.

The substance of the dispute. At a time when AI companies seek to consolidate revenue, justify valuations and demonstrate usefulness in critical environments, the relationship with the defense sector is both a showcase and a first-rate source of business. At the same time, it is also an area where ethical and strategic limits become most visible. Anthropic’s decision to maintain certain restrictions may reinforce its identity as a safety-oriented company, but it may also limit its access to million-dollar contracts.

Images | Anthropic | Oleg Ivanov

In Xataka | Data centers in space promise to save the planet. And also ruin the earth’s orbit
