The US tried to treat Anthropic as an enemy company for refusing to arm its AI. A judge just stopped it

There is a new chapter in the clash between Anthropic and the Pentagon, and it is one that cannot have sat well with the Trump administration. After declaring the company "a risk to the supply chain" and putting it on the blacklist, Anthropic went to court, and the judge has just sided with it: the order has been frozen.

What has happened. The Trump administration sought to punish Anthropic after it refused to let its AI be used in lethal autonomous weapons and mass surveillance, but Judge Rita Lin, of the Northern District of California, has just blocked the order. The judge has asked the government for a report, to be presented before April 6, detailing how it has complied with her resolution. The government has seven days to appeal.

"Orwellian idea". The judge is quite harsh with the government's decision. She considers it an "arbitrary and capricious" move and holds that "no provision of the applicable law supports the Orwellian idea that an American company can be branded as a potential adversary and saboteur of the United States for expressing its disagreement with the Government." Furthermore, she notes that if the problem is that they do not trust Anthropic's AI, "the War Department could simply stop using Claude." That is not going to sit well with the Trump administration. Her order also mentions the "financial and reputational harm" Anthropic would be exposed to if the measure were applied, arguing that it could leave the company paralyzed.

Why it matters. It is the first time a restriction of this caliber has been applied to a domestic company. Supply chain risk is defined as "the risk that an adversary could sabotage or subvert a covered system," but here it has been used as punishment for a disagreement. Moreover, if the order were implemented, Anthropic would be commercially isolated: it would be barred from working not only with civilian agencies, but also with private companies that want to do business with the defense department.

And now what. Several legal experts had already warned that the decision would not survive legal scrutiny, and it hasn't. The ruling is a victory for Anthropic, which said in a statement that "our goal remains to collaborate constructively with the Government to ensure that all Americans benefit from safe and reliable AI." The question now is what the next step of the Trump administration will be; it has not yet commented on the matter.

In Xataka | OpenAI says its deal with the Pentagon is secure. Seriously, really, you have to believe it, trust it, it assures you

Image | Anthropic, edited

If OpenAI wants to beat Anthropic, it needs more hands. So it is going to double its workforce.

OpenAI has just realized that it has been launching products without rhyme or reason and needs to focus on something. That something is the business sector, where an Anthropic with a much clearer business plan has been eating into its ground. To pull it off, OpenAI needs to grow its staff. A lot.

More hands. According to the Financial Times, OpenAI is planning to increase its headcount throughout the year, almost doubling it. The company currently has around 4,500 employees and the idea, according to internal sources, is to reach 8,000. Hitting that figure would mean hiring 12 people a day; human resources is going to have its hands full. The departments that need the most personnel are product development, engineering, research and sales. There is also a role the company wants to reinforce: "technical ambassadors," advisors who will guide companies that use its products so they get the most out of them. OpenAI has also rented new offices in San Francisco, which will bring its total floor space to more than 90,000 square meters.

Unstoppable Anthropic. This is part of a strategic overhaul that seeks to regain ground in the business segment, where Anthropic has built a very solid position. According to data from the Ramp AI Index, although OpenAI is still the most used solution in business, its adoption is falling while Anthropic's is taking off: 70% of companies buying AI solutions for the first time choose Anthropic. At this rate, it will overtake OpenAI in business users before long.

It's not that big of a deal. OpenAI has downplayed that figure because Ramp is a financial service and its data comes only from transactions made with its cards: "it's like saying that global sales of lemons can be calculated based on my son's lemonade stand," said a company spokesperson. Be that as it may, the reality is that OpenAI is taking steps toward restructuring its product portfolio and its organization, and recovering business share is among its priorities.

Unify the portfolio. As part of this strategic pivot, OpenAI is planning the launch of a super app that will unify Codex, ChatGPT and the Atlas browser into a single tool. During 2025 OpenAI launched many very disparate products, many of them half abandoned along the way, like ChatGPT Atlas. Besides showing a clear lack of focus, it is a very inefficient strategy: there are many teams, they all need computing capacity, and no one is clear on what to prioritize. A recipe for disaster. The change is led by Fidji Simo, the company's head of apps, who recently told employees: "We cannot waste this moment because we are distracted by parallel projects."

The agreement with the Pentagon. All this coincides with the Anthropic-Pentagon soap opera. After weeks of tension, Anthropic ended up on the government's blacklist and OpenAI signed the agreement. What followed was people uninstalling ChatGPT en masse: in the public eye, ChatGPT became the bad AI and Anthropic the good AI that had not given in to government pressure. Sam Altman assured that there was no problem, that we could rest easy and that OpenAI also had red lines, a statement that has little to do with the facts.

In Xataka | OpenAI wants us to have sex with ChatGPT. Its wellness advisors think it's a terrible idea

Image | Levart_Photographer, Nathan Sack on Unsplash

Anthropic is winning the enterprise AI race, so OpenAI has a new plan: become Anthropic

OpenAI has been shooting at everything that moved in AI. It has launched everything: a video generator, an AI web browser, an image generator with Studio Ghibli style, e-commerce tools, etc. The logic was simple: whoever tries everything has more chances of getting something right. But the result has been the opposite. While OpenAI seemed to be everywhere, Anthropic was focused on a single front and has managed to gain ground where it mattered most.

Enough of trying everything. Fidji Simo, the executive Altman hired last summer, recently gathered employees to give them a message rarely heard at a company growing like OpenAI: their main rival was teaching them a lesson. What Anthropic is doing, Simo explained, should be a wake-up call for OpenAI, which has lost leadership among software developers and enterprise customers. "We cannot waste this moment because we are distracted by parallel projects," she stressed.

The hidden cost of doing a little of everything. The problem with shooting at everything that moves is not only focus, but the resources it demands. In companies that develop foundation models, the key resource is computing capacity, and at OpenAI that resource jumped from one team to another depending on the priorities of the day. The Sora team, for example, was folded into the research division despite being one of the company's most visible products. OpenAI was growing fast in too many directions, and that also created internal tensions over which project should be prioritized.

Anthropic focused on one thing. As OpenAI diversified, its main rival adopted the completely opposite strategy: few products, a lot of depth. Claude does not generate images or video, does not have its own browser and is not trying to create its own chips (for now). Anthropic is dedicated to creating foundation models and offering them both as a web service and, above all, through APIs for companies and developers. Claude Code, its flagship programming product, became a viral phenomenon among software engineers last fall and has ended up consolidating itself as the reference tool among amateur developers (vibe coding is still going strong) and, of course, among technical teams in all kinds of companies.

OpenAI strikes back. The response has not been long in coming: last month OpenAI launched a new version of Codex, its programming tool, and accompanied it with the new GPT-5.4, which is much more oriented toward professional environments. According to Simo herself, Codex already exceeds two million weekly active users, almost four times more than at the beginning of the year. To drive usage, OpenAI is deploying engineers to consulting firms and business partners to accelerate adoption of these products.

IPO on the horizon. Both OpenAI and Anthropic are taking clear steps toward an IPO, which in fact could happen this year. That makes gaining share in the corporate market (the one that really pays, the one that signs contracts, and the one that justifies valuations) absolutely essential for those IPOs to succeed. The initial share price and real valuation of these companies will depend on how well positioned they are, and OpenAI wants to recover the ground lost in the enterprise market. In the meeting with staff, Simo explained that "we are acting as if this were a code red."

The paradox of being the pioneer.
OpenAI unleashed the AI fever with the launch of ChatGPT in November 2022 and made generative AI an almost everyday phenomenon. However, being first usually comes with a trap: it forces you to explore and diversify to maintain your position as the reference, and that is very expensive. Anthropic came along later, saw where the real money was, and focused specifically on that sector. The student has surpassed the teacher, it seems, and OpenAI wants to correct course.

What will happen to so much product? It remains to be seen how this strategy affects OpenAI's entire product catalog. If the company starts focusing on developers and enterprise solutions, what will happen to its image generator, Sora or Atlas? The structural tension between being a "research laboratory" and being a "product company" can pose a challenge for a company that never stopped exploring new ideas to apply AI to.

Image | TechCrunch | Wikimedia Commons

In Xataka | Sam Altman says he's terrified of a world where AI companies believe themselves to be more powerful than the government. It's exactly what you're building

We believed that human programmers would end up being code reviewers. Anthropic just killed that

With the rise of generative AI, the world of software development seemed to follow a clear script: models would write the code and humans would review it. That was the new balance. Well, Anthropic just killed it.

The problem of programming with AI. What we know today as vibe coding (the practice of giving instructions in natural language to an AI so that it generates code at full speed) has skyrocketed software production in companies. Anthropic says the amount of code generated by each of its own engineers has grown by 200% in the last year. And now there is a problem: there is so much new code that reviewing it has become the bottleneck of the process. Human developers cannot cope. Many pull requests (change proposals that must be reviewed before new code is merged) are skimmed or barely read at all.

What Anthropic has done. The company has released Code Review, a tool integrated into Claude Code that, instead of waiting for a human to review the code, deploys a team of AI agents to do it automatically every time a pull request is opened. The system is now available in preview for Team and Enterprise plan customers. Cat Wu, Product Manager at Anthropic, told TechCrunch that the question they constantly received from their clients' technical managers was always the same: "Now that Claude Code is generating a ton of pull requests, how do I make sure they are reviewed efficiently?"

How it works inside. The moment a pull request is opened, AI agents work in parallel and autonomously, examining the code from different perspectives. A final agent then aggregates and prioritizes the issues found, removing duplicates and sorting them by severity. The result reaches the developer as a summary comment, accompanied by inline comments on specific bugs. The focus, according to Anthropic, is on logic errors, not matters of style, a deliberate choice so that the feedback does not generate too much noise. Issues are labeled by color depending on importance: red for critical, yellow for attention, and purple for pre-existing code.
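To make that flow concrete, here is a minimal sketch of such a pipeline in Python. Everything in it (the agent names, the findings, the severity scheme) is hypothetical: Anthropic has not published Code Review's internals, so this only illustrates the parallel-review-then-aggregate architecture the company describes.

```python
# Minimal sketch: reviewer "agents" examine a pull request in parallel,
# then a final aggregation step deduplicates and sorts by severity.
# All names are hypothetical; this is not Anthropic's implementation.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass

# Severity buckets mirroring the color labels described in the article:
# red = critical, yellow = needs attention, purple = pre-existing code.
SEVERITY_ORDER = {"red": 0, "yellow": 1, "purple": 2}

@dataclass(frozen=True)  # frozen -> hashable, so a set can deduplicate
class Finding:
    file: str
    line: int
    severity: str  # "red" | "yellow" | "purple"
    message: str

def security_agent(diff: str) -> list[Finding]:
    # Placeholder: a real agent would call a model with a
    # security-focused review prompt over the diff.
    return [Finding("auth.py", 42, "red", "Token check can be bypassed")]

def logic_agent(diff: str) -> list[Finding]:
    # Placeholder for a logic-focused perspective; note the duplicate
    # finding, which the aggregation step must remove.
    return [Finding("auth.py", 42, "red", "Token check can be bypassed"),
            Finding("api.py", 10, "yellow", "Possible off-by-one in paging")]

def review_pull_request(diff: str) -> list[Finding]:
    agents = [security_agent, logic_agent]
    # Run the perspectives in parallel, as the article describes.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda agent: agent(diff), agents))
    # Final agent's job: flatten, deduplicate, prioritize by severity.
    merged = {finding for batch in results for finding in batch}
    return sorted(merged, key=lambda f: (SEVERITY_ORDER[f.severity], f.file, f.line))

for finding in review_pull_request("<pr diff here>"):
    print(f"[{finding.severity}] {finding.file}:{finding.line} {finding.message}")
```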
Numbers. The company used Code Review internally for months before bringing it to market. According to Anthropic, before implementing it only 16% of its pull requests received meaningful review comments; with the tool, that figure rises to 54%. In large pull requests (more than 1,000 modified lines), 84% of reviews returned results, with an average of 7.5 problems detected, and less than 1% of those results are flagged as incorrect by the engineers themselves. In one of the cases documented by the company, a single-line change seemed routine; Code Review marked it as critical because it could apparently have broken the entire service's authentication. The bug was fixed before integration, and according to the company, the engineer later acknowledged that he would not have caught it on his own.

The new role of the programmer. The narrative of the last two years was that developers would evolve toward a profile closer to that of a reviewer or supervisor of AI-generated code. Now that transition is also being automated, at least in part. Anthropic does not remove the human from the equation (in fact, the tool does not approve pull requests), but it does compress the review work that was supposed to be the last bastion. It seems the human now goes from reviewer to final arbiter.

Price. It is not a cheap tool. Each review has a cost based on token consumption: Anthropic estimates the average price per review at between $15 and $25, depending on the complexity of the code. It is a cost the company justifies in the context of large technology companies, where errors that slip past review carry a much higher price.

Cover image | Compagnons

In Xataka | Software companies sank on the stock market for a simple reason: investors are panicking about AI

The Pentagon labeled Anthropic a national security risk. So Anthropic is suing the Pentagon

The soap opera between Anthropic and the Pentagon has a new chapter (and counting). After the push and pull of recent weeks, Anthropic stood firm, and that ended with the US putting the company on its blacklist. Anthropic was not amused.

What has happened. Anthropic has sued the US Department of Defense (or War), calling the decision to blacklist it "unprecedented and illegal" and arguing that it will cause irreparable harm to the company. In statements to Fortune, an Anthropic spokesperson said the company remains committed to protecting national security and wants to find a solution, but that the lawsuit "is a necessary step to protect our business, our customers and our partners." The administration has not commented on the suit.

A lot of money at stake. By blacklisting Anthropic, the government prevents defense contractors and suppliers from using Claude in their Pentagon-related activities. Additionally, Trump ordered the entire government to stop using Anthropic's AI. The company says government contracts are already being canceled and other private contracts are in jeopardy. Anthropic's commercial director, Paul Smith, says one client has already swapped Claude for another generative AI; that contract alone will cost the company at least 100 million dollars.

Doubts about legality. Anthropic says the government's move is not legal. Is it right? According to legal experts at Lawfare, the "supply chain risk" label will not withstand judicial scrutiny. The main reason is that the designation is intended for foreign adversaries, as happened with Huawei. The law defines it as "the risk that an adversary could sabotage or subvert a covered system"; it says nothing about using it to punish a domestic company over a disagreement. According to Lawfare, the statements by Trump and the defense secretary "frame the action as ideological punishment of a political enemy."

The disagreement. The origin of this escalation lies in the red lines Anthropic drew. Basically, the company refused to allow its model to be used for mass surveillance of citizens and, above all, for the development of lethal weapons without human supervision. The concern is justified: a soldier can refuse to carry out an illegal order; an AI cannot. The Pentagon does not like red lines (other people's, of course) and demanded to be able to use the technology without limits. In Trump's words in a Truth Social post: "We will decide the fate of our country, NOT an out-of-control radical left-wing AI company run by people who have no idea what the real world is like."

Meanwhile, OpenAI… Shortly after Anthropic was blacklisted, the government found a new candidate to carry out its plans: OpenAI. According to Sam Altman's company, its development has more safeguards and, hey, calm down, it's not that big of a deal. What has followed is an image crisis for ChatGPT, with resignations and mass uninstalls by users who have switched to Claude. But let's not fool ourselves: although Anthropic has won the battle of public opinion, if the US holds its course, the future looks pretty bleak for Amodei's company.

In Xataka | Anthropic has become the Apple of our era and OpenAI our Microsoft: a story of love and hate

Image | Anthropic (edited)

Microsoft wants Copilot to do more complex tasks. To achieve this, it has turned to Anthropic's AI

For a long time, when we talked about artificial intelligence at Microsoft, one name came up again and again: OpenAI. The relationship between the two companies was decisive for the takeoff of ChatGPT and also for the launch of Copilot. But the AI board is shifting quickly: new models, new players and increasingly intense competition are pushing large technology companies to diversify their bets. Microsoft's latest move should be understood in that context.

The announcement. Microsoft has decided to integrate Anthropic technology into Copilot, the assistant already built into tools such as Outlook, Teams and Excel within Microsoft 365. Among the new features is Cowork, a tool based on Anthropic technology aimed at facilitating tasks within the work environment. And that is not all: Claude's models will also be available inside the Copilot chatbot alongside the more advanced OpenAI models, expanding the assistant's capabilities without depending on a single artificial intelligence provider.

From asking for something to delegating work. Microsoft explains that Cowork is designed to go a step beyond the classic model of an assistant that answers questions or writes texts. The idea is that Copilot can take care of entire tasks within Microsoft 365. When the user makes a request, the system converts it into a work plan that runs in the background, drawing on data from Outlook, Teams or Excel. From there, in theory, it proposes actions, asks for clarification if needed, and lets the user review or approve each step before the changes are applied.

Some examples. Imagine we ask Copilot to review our agenda in Outlook. The system could analyze the calendar, detect conflicts between meetings and identify lower-priority ones. From there it would propose adjustments, such as rescheduling some appointments or reserving blocks of time to focus on more important tasks. Once those suggestions are reviewed and approved, the system itself could apply the changes automatically: accepting, rejecting or rescheduling meetings and blocking out focus time. The sketch after this paragraph models that loop.
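As an illustration of that request-plan-approve workflow, here is a minimal sketch in Python. Microsoft has not published Cowork's API, so every name and structure here is hypothetical; the point is only the shape of the loop: draft a plan in the background, then apply nothing until each step is approved.

```python
# Illustrative sketch of the "delegate, review, approve" loop described
# above. All names are hypothetical; this only models the workflow.
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    description: str   # e.g. "Reschedule the 1:1 with Ana to Friday"
    approved: bool = False

@dataclass
class WorkPlan:
    request: str                                  # the user's original ask
    actions: list[ProposedAction] = field(default_factory=list)

def build_plan(request: str) -> WorkPlan:
    # Placeholder: the real system would read Outlook/Teams/Excel data
    # and have a model draft the plan in the background.
    return WorkPlan(request, [
        ProposedAction("Decline the low-priority status meeting on Tuesday"),
        ProposedAction("Block 9:00-11:00 on Thursday for focused work"),
    ])

def run_with_approval(plan: WorkPlan) -> None:
    # Nothing is applied until the user signs off on each step.
    for action in plan.actions:
        answer = input(f"Apply '{action.description}'? [y/N] ")
        action.approved = answer.strip().lower() == "y"
    for action in plan.actions:
        if action.approved:
            # A real system would call the calendar API here.
            print(f"Applied: {action.description}")

run_with_approval(build_plan("Tidy up my calendar for next week"))
```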
The strategy. As noted above, the move also reflects how Microsoft's AI strategy is changing. The company has maintained a very close relationship with OpenAI for years and remains one of its largest shareholders, with a stake close to 27% after investments of around $13 billion since 2019. However, the rise of new models and the rapid evolution of the sector are pushing large technology companies not to depend on a single technology. Incorporating Anthropic tools into Copilot points precisely in that direction: building an ecosystem that can lean on different models depending on the task.

Platforms before models. Decisions like this show that the AI race is not limited to developing ever more advanced models; it is also about deciding where those capabilities will live. In Microsoft's case, the answer seems quite broad: the company has been integrating Copilot into more and more products and services in its ecosystem (and external ones too). For some users, that constant presence can be very useful; for others, somewhat invasive. But beyond those perceptions, the move clearly shows Microsoft's strategy.

Taken together. This is not just about adding another technology to Copilot, but about reinforcing the idea that Microsoft wants to turn the assistant into a meeting point for different AI capabilities within its software. Incorporating Anthropic models alongside OpenAI's points precisely to that scenario. Rather than relying on a single technology, the company appears to be laying the groundwork for a Copilot capable of combining different solutions as the AI market continues to evolve.

Images | Microsoft

In Xataka | The best and worst of the Internet we know has been built on anonymity. AI brings bad news

Anthropic releases a new feature that lets you download all your memory, leave ChatGPT and switch to Claude

This weekend Anthropic went from being an AI used by the Pentagon and other US agencies, with partners such as Microsoft and Amazon, to total ostracism: since Friday at 5:01 p.m. it is classified as a "risk to the supply chain." A total veto, a serious threat to the survival of a company valued at $380 billion, and also a challenge for the entities that will have to transition to an alternative in less than six months. The Pentagon itself already has an agreement with OpenAI to take its place. Anthropic's situation is delicate, to say the least, when it comes to serving its strategic clients and alliances, something essential to keep growing in the tough AI battle. The company led by Dario Amodei, which stood firm on its principles when expressing its concern about the use of artificial intelligence for mass civilian surveillance and for weapons capable of firing without human intervention, has already announced that it will appeal, but for now things look rough. The civilian market is all it has left... in every sense, because Claude has risen to number 1 in free downloads in the US App Store, as reported by CNBC. Because yes, this tug of war with the US government has boosted the popularity of Claude, less known than alternatives such as ChatGPT or Gemini. And this move, in which the US Administration has dumped Anthropic in favor of OpenAI, has another reading in Claude's favor: the terms of the agreement and how it affects ChatGPT users.

Anthropic's masterstroke. Anthropic has pulled a new feature out of its sleeve to ease the transition from other AI models, such as ChatGPT or Gemini, to Claude. Because if you have been using ChatGPT for a while, for example, and it already knows you, starting from scratch is a step backwards in every sense. The new feature lets you import all your memory from other models into Claude so that it immediately knows everything about you (everything your previous AI already knew). You no longer start from scratch.

How to download your memory and load it into Claude. Bringing your preferences and context from other AI providers into Claude takes two steps. First, copy and paste the prompt below into the AI you normally use, like Gemini or ChatGPT:

"I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: (date saved, if available) – memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries."

The model will return everything it knows about you in a block of text. Second, copy that block, go to 'Settings' > 'Capabilities' in Claude and, under 'Import Memory', paste the answer. Then tap 'Add to memory'. From that moment on, Claude knows what your previous AI knew.
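If you want to sanity-check the exported block before pasting it into Claude, a few lines of Python are enough, since the prompt asks the model for a fixed per-entry format. This is an unofficial, illustrative helper, and the sample entries are made up.

```python
# Unofficial helper: parse an exported memory block whose entries follow
# the format requested by the prompt above:
#   (date saved, if available) – memory content
import re

# Accept either an en dash or a plain hyphen as the separator.
ENTRY = re.compile(r"^\((?P<date>[^)]*)\)\s*[–-]\s*(?P<memory>.+)$")

def parse_export(block: str) -> list[dict]:
    entries = []
    for line in block.strip().splitlines():
        match = ENTRY.match(line.strip())
        if match:
            entries.append({"date": match.group("date"),
                            "memory": match.group("memory")})
    return entries

# Made-up sample to show the expected shape of the export.
sample = """(2025-03-14) – Prefers answers in Spanish
(no date) – Works as a backend developer using Go"""

for entry in parse_export(sample):
    print(entry)
```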
The fine print. This is a feature for users on a paid plan (Pro, Max, Team or Enterprise). If you are on the free version, at most you will be able to use that context within a single conversation, but not permanently. In short: the import is free as a manual process, but for Claude to remember it permanently, a paid plan is required.

In Xataka | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing who really owns it

While Anthropic goes on the US blacklist, the Pentagon already has someone to succeed it: OpenAI

The Pentagon gave Anthropic an ultimatum: accept the unlimited use of its AI models for applications of all kinds, including espionage and military use. The deadline arrived at 5:01 p.m. this Friday, February 27, and Anthropic said no: it would stay faithful to its principles. The sword of Damocles has fallen on the company led by Dario Amodei, and the United States has made good on its threats.

How it was communicated. A few hours ago, United States Secretary of War Pete Hegseth announced that, for the Pentagon, Anthropic is now a "risk to the supply chain."

The context. This chronicle of a death foretold has met every deadline, with everyone holding their initial position: Anthropic rejected the Pentagon's demand over concerns about the use of AI for mass civilian surveillance and for weapons capable of firing without human intervention. The company behind Claude has already announced that it will appeal. We will have to see the cost of holding its position: the United States is applying a sanction that until now we had only seen applied to companies from rival countries, Huawei being one of the clearest examples.

What happens now. Leaving aside the fact that the president of the United States refers to Anthropic as "a radical left-wing and woke company" on his social network Truth Social, the US Department of Defense has carried out its threat, effective immediately: it will terminate its contract with Anthropic, valued at up to $200 million, and, as Pete Hegseth announced, no contractor, supplier or partner doing business with the United States Armed Forces may do business with Anthropic. There will be a six-month period for the Pentagon and other government agencies to transition from Claude to alternatives.

OpenAI said yes. The United States already has a company to serve the Pentagon and other agencies: OpenAI. Sam Altman announced the agreement to deploy its models on the government's classified network, explaining that the Department of Defense had shown a "deep respect for security" and that both AI safety and broad benefit sharing are the foundation of OpenAI's mission. Among the security principles Altman specifically mentioned are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems. According to the CEO of OpenAI, the War Department is aligned with these principles. He also explained that they will apply technical safeguards to guarantee the correct behavior of their models.

Claude's shadow is long. Saying goodbye overnight to your reference AI company (even with that transition period) and vetoing other companies that work with it is a tricky measure to put into practice, because Claude is behind recent strategic operations, such as Maduro's arrest, and other imminent ones. It also leaves in limbo projects such as Palantir's, which uses Claude.

Behind the scenes. According to Axios, Deputy Secretary of Defense Emil Michael was in talks with Anthropic to offer a deal just as Pete Hegseth dropped the bomb on X/Twitter. That theoretical agreement would have allowed the collection or analysis of data on US citizens, such as location, web browsing or financial information. For now it is unknown whether the Pentagon's interest in collecting personal data extends to its deal with OpenAI.

In Xataka | IBM has lived for decades off the fact that no one could kill COBOL. Anthropic has other plans

In Xataka | Anthropic and OpenAI have developed AI.
The US Pentagon is showing who really owns it

Cover image | Tomasz Zielonka

The war between Anthropic and the Pentagon points to something terrifying: a new “Oppenheimer Moment”

Anthropic has refused to bow to pressure from the Pentagon. Its co-founder and CEO, Dario Amodei, has just published a statement making it clear that the company is not willing to break its ethical principles: no mass espionage with AI, no development of lethal autonomous weapons with its models. And that recalls a terrible precedent: the atomic bomb.

From hero to villain. J. Robert Oppenheimer went from being the "father of the atomic bomb" and a national hero to becoming an outcast. His sin was not betrayal, but moral clarity. After witnessing the horror of Hiroshima and Nagasaki, Oppenheimer desperately tried to stop the atomic escalation and the development of the hydrogen bomb.

Either you are with us, or against us. The United States, which had praised him in the past, used his former political affiliations against him and stripped him of all his privileges and influence. That showed how the US government simply decided that scientific knowledge was state property, and that any researcher who tried to set ethical limits on his own projects would be treated as an enemy of the country. History is threatening to repeat itself these days.

From Oppenheimer to Anthropic. It is doing so with one protagonist that remains the same (the US Government) and another that has changed: the one now defending the ethics of a scientific-technological project is not Oppenheimer but Dario Amodei, CEO of Anthropic.

Claude is increasingly vital to the US Government. Amodei's company is between a rock and a hard place these days. Anthropic managed to make its model Claude the darling of the US Government. The ability of this AI has proven so remarkable that it was apparently used to plan the arrest of the former president of Venezuela, Nicolás Maduro.

Red lines. But for the Pentagon to use Claude, Anthropic imposed certain red lines: no use for mass surveillance of US citizens, and no use for the development of lethal autonomous weapons. The Pentagon has ended up disliking those red lines, so it wants to eliminate them and use Claude as it pleases as long as, it says, the Constitution and American laws are respected.

The Pentagon wants AI without restrictions. That has produced an enormously tense situation. The Pentagon threatened to punish Anthropic if it did not give in to its demands, and those threats from the Department of Defense have not been subtle at all. In fact, it has suggested it could label Anthropic a "supply chain risk," a black mark typically reserved for companies from rival countries like China or Russia.

Contradiction. Dario Amodei himself explained in a post on the company's official blog that the two threats cancel each other out: "These last two threats are inherently contradictory: one labels us as a security risk; the other labels Claude as essential to national security."

Can AI be nationalized? It is a disturbing irony: the same government that considers Claude an essential tool for national security is willing to label its creators a public threat if they do not hand over the keys to the kingdom and their AI. What the Department of Defense wants is basically to "nationalize" the AI technology developed by Anthropic and appropriate it, as it already did with the technology that gave rise to the atomic bomb. We know how that ended.

Anthropic refuses to give in.
The danger is enormous on both fronts: mass surveillance, rather than defending democracy, can blow it up from within (the NSA scandal is a good example). But even more worrying is the Pentagon's intention to use this AI to develop lethal autonomous weapons. Amodei insisted on this point, saying that "AI foundation models are simply not reliable enough to power fully autonomous weapons" and that "we will not knowingly provide a product that puts American warfighters and civilians at risk." Amodei even offers the Department of War/Defense help in the "transition to another provider" of AI models, but for now it is not clear which path the US government will take.

Oppenheimer Moment. If the Pentagon finally executes its threat and bans Anthropic, the message to the industry will be chilling: in the age of AI there are no conscientious objectors; if a company develops a technological and strategic advantage at a military level, that company is at the mercy of the State. It is a new and terrifying "Oppenheimer Moment" that conditions not only Anthropic's future, but that of AI development itself.

In Xataka | "The world is in danger": Anthropic's security manager leaves the company to write poetry

Anthropic has red lines for its AI. The Pentagon just demanded it delete them all

The Pentagon just gave Anthropic until this Friday at 5:01 p.m. to accept the unrestricted use of its AI models for all types of applications, including espionage and military ones. The company has so far refused, but the Trump administration is threatening to invoke a 75-year-old law to "appropriate" Anthropic's AI technology.

Red lines. The conflict has its origin in the red lines imposed by Anthropic's ethical standards. The company, led by Dario Amodei, refuses to let its models be used for mass surveillance of American citizens (it says nothing about citizens of other countries) or in the development and use of lethal autonomous weapons controlled entirely by AI.

The Pentagon wants to use AI (almost) without limits. These safeguards clash head-on with the Pentagon's position, which demands that its technology providers open their software and hardware solutions to any legal purpose defined by the military, without external vetoes. As long as the US Constitution and laws allow it, a private company should not be able to impose limits on the use of its technology, the US Government argues.

Tension after the Maduro incident. Things began to go wrong when it emerged that the Claude model was used in a US special forces operation in January to capture the former Venezuelan president, Nicolás Maduro. The incident put the army's dependence on Claude under the microscope: Anthropic is currently the only AI company operating in the Pentagon's classified systems, which gives it a notable position of power that the US government now wants to break.

This smells bad. The Pentagon's strategy is disturbing from a legal point of view. There are three main courses of action:

- Cancel the Anthropic contract and start working with one or more other AI companies willing to accept its terms. Yesterday we learned that xAI has already signed an agreement for the DoD to use its Grok model on classified systems. Google also seems to be an option under consideration.
- Designate Anthropic a risk to the supply chain. That is very dangerous, because it would mean a huge number of US companies could no longer work with Anthropic. It would be a veto like the one the US imposed on Huawei, but applied to a domestic company. The impact on Anthropic and its investors (Amazon and Google among them) would be catastrophic.
- Invoke Title 1 of the Defense Production Act of 1950, a special law designed to control the economy during wars and emergencies. It was used, for example, during the COVID-19 pandemic to boost the production of medical supplies and accelerate vaccine production. It seems unlikely they could pull off something like that here.

How did this whole mess start? The Biden administration promoted measures and ethical limits to restrict the application of AI, but everything changed with Donald Trump's mandate. In June 2025 Anthropic released Claude Gov, a specialized series of AI models designed specifically for use by US national agencies in security, defense and intelligence.

AI with military and intelligence applications. These models were prepared to operate in environments with classified information. Anthropic also offered them for a symbolic price of $1 to ensure the Government would prefer them over competitors'. Shortly thereafter, the DoD granted the company a contract worth $200 million, and the company has since been integrating with the Palantir systems used across US government agencies.
Two opposing positions. Anthropic positions itself as a defender of certain limits on the use of its AI models. The Department of Defense (DoD) disagrees, arguing that military use of any technology should be bound only by the US Constitution and its laws. The company maintains that it seeks to support the national security mission, but only within what its models can do reliably and responsibly.

The dilemma. If the Pentagon carries out its threat, a precedent will be set whereby the State can commandeer a software company's intellectual property under a national-emergency argument. That would force every Big Tech company to decide whether it is willing to cede full control of its technological developments to the military... or risk being taken over under a 75-year-old law.

Image | Ben White | Anthropic

In Xataka | IBM has lived for decades off the fact that no one could kill COBOL. Anthropic has other plans
