Anthropic releases a new feature to download all your memory, leave ChatGPT and switch to Claude

This weekend Anthropic has gone from being an AI company used by the Pentagon and other US agencies, with partners such as Microsoft and Amazon, to total ostracism: since Friday at 5:01 p.m. it has been classified as a "risk to the supply chain". A total veto, a serious threat to the survival of a company valued at $380 billion, and also a challenge for the agencies that will have to transition to another alternative in less than six months. The Pentagon already has an agreement with OpenAI to succeed it.

Anthropic's situation is delicate, to say the least, when it comes to serving its strategic clients and alliances, something essential to keep growing in the tough battle of artificial intelligence. The company led by Dario Amodei, which stood firm on its principles when expressing its concern about the use of artificial intelligence for mass civil surveillance and the development of weapons capable of firing without human intervention, has already announced that it will contest the decision, but for now things look rough. It only has the civilian market left... in every sense, because Claude has risen to number 1 in free downloads in the US App Store, as reported by CNBC. Because yes, this tug of war with the US government has brought an increase in the popularity of Claude, less known than alternatives such as ChatGPT or Gemini. On the other hand, the move in which the US Administration has said goodbye to Anthropic in favor of OpenAI also has a reading in which Claude wins: the terms of the agreement and how they affect ChatGPT users.

Anthropic's masterstroke. Anthropic has pulled a new feature out of its sleeve to ease the transition from other AI models, such as ChatGPT or Gemini, to Claude. Because if you have been using ChatGPT for a while, for example, and it already knows you, starting from scratch is a step backwards in every sense. The new feature allows you to import all your memory from other models into Claude so that it immediately knows everything about you (everything your previous AI already knew). You no longer start from scratch.

How to download your memory and load it into Claude. Bringing your preferences and context from other AI providers into Claude takes two steps. First, copy and paste the prompt below into the AI you normally use, like Gemini or ChatGPT:

I'm moving to another service and need to export my data. List every memory you have stored about me, as well as any context you've learned about me from past conversations. Output everything in a single code block so I can easily copy it. Format each entry as: (date saved, if available) – memory content. Make sure to cover all of the following — preserve my words verbatim where possible: Instructions I've given you about how to respond (tone, format, style, 'always do X', 'never do Y'). Personal details: name, location, job, family, interests. Projects, goals, and recurring topics. Tools, languages, and frameworks I use. Preferences and corrections I've made to your behavior. Any other stored context not covered above. Do not summarize, group, or omit any entries.

The model will return everything it knows about you in a block of text, which you then copy. In Claude, go to 'Settings' > 'Capabilities' and, under 'Import memory', paste the answer. Then tap 'Add to memory'. From that moment on, Claude knows what your previous AI knew.
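If you'd rather script the transfer than use the app, the same idea can be approximated through Anthropic's public API. Below is a minimal, hypothetical sketch using the official Python SDK that passes the exported memory as system context for one session; the model id and file name are illustrative assumptions, and unlike the app's feature, nothing here is stored permanently:

```python
# Hypothetical sketch: reuse an exported memory dump as system context.
# This mimics the import for a single API session; it does NOT persist
# anything. The model id and the file path are illustrative assumptions.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The block of text your previous AI returned for the export prompt
memory = open("exported_memory.txt", encoding="utf-8").read()

reply = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id
    max_tokens=1024,
    system=f"Context about this user, imported from another assistant:\n{memory}",
    messages=[{"role": "user", "content": "What do you already know about me?"}],
)
print(reply.content[0].text)
```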
It has fine print. This is a feature for users on a paid plan (Pro, Max, Team or Enterprise). If you are on the free version, at most you will be able to use that context within a single conversation, but not permanently. In short: the import is free as a manual process, but for Claude to remember it permanently a paid plan is required.

In Xataka | Claude: 23 functions and some tricks to get the most out of this artificial intelligence

In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing who really owns it

While Anthropic goes on the US blacklist, the Pentagon already has someone to succeed it: OpenAI

The Pentagon gave Anthropic an ultimatum to accept the unlimited use of its AI models for applications of all kinds, including espionage and military use. The deadline arrived at 5:01 p.m. this Friday, February 27, and Anthropic said no: it would stay faithful to its principles. The sword of Damocles has fallen on the company led by Dario Amodei, and the United States has carried out its threats.

How it was communicated. A few hours ago, United States Secretary of War Pete Hegseth announced that, for the Pentagon, Anthropic is now a "risk to the supply chain."

The context. This chronicle of a death foretold has been meeting its deadlines, and everyone has remained in their initial position: Anthropic rejected the Pentagon's demand over concerns about the use of AI for mass civilian surveillance and the development of weapons capable of firing without human intervention. The company behind Claude has already announced that it will contest the decision. We will have to see the cost of maintaining its position: the United States will apply a sanction that until now we had only seen applied to companies from rival countries, Huawei being one of the clearest examples.

What's going to happen now. Leaving aside the fact that the president of the United States refers to Anthropic as "a radical left-wing and woke company" on his social network Truth Social, the US Department of Defense has carried out its threat, which has come into effect immediately: it will terminate its contract with Anthropic, valued at up to $200 million, and, as announced by Pete Hegseth, no contractor, supplier or partner doing business with the United States Armed Forces may do business with Anthropic. There will be a six-month period for the Pentagon and other government agencies to transition from Claude to alternatives.

OpenAI said yes. The United States already has a company to provide its services to the Pentagon and other agencies: OpenAI. Sam Altman announced the agreement to deploy its models on the Pentagon's classified network, explaining that the Department of Defense had shown a "deep respect for security" and that both AI security and broad benefit sharing are the foundation of OpenAI's mission. Among the security principles specifically mentioned by Altman are the prohibition of domestic mass surveillance and human responsibility for the use of force, including autonomous weapons systems. According to the CEO of OpenAI, the War Department is aligned with these principles. Likewise, he explained that they will apply technical safeguards to guarantee the correct behavior of their models.

Claude's shadow is long. Saying goodbye overnight to your reference AI company (even with that transition period) and vetoing other companies from working with it is a tricky measure to put into practice, as Claude is behind recent strategic operations, such as Maduro's arrest, and other imminent ones. Likewise, it leaves projects such as Palantir's, which uses Claude, in limbo.

Behind the scenes. According to Axios, Deputy Secretary of Defense Emil Michael was in talks with Anthropic to offer a deal just as Pete Hegseth dropped the bomb on X/Twitter. This theoretical agreement would have allowed the collection or analysis of data on US citizens, such as location, web browsing or financial information. For the moment it is unknown whether this interest of the Pentagon in collecting personal data will also apply to OpenAI.

In Xataka | IBM has lived for decades off the fact that no one could kill COBOL. Anthropic has other plans

In Xataka | Anthropic and OpenAI have developed AI. The US Pentagon is showing who really owns it
Cover image | Tomasz Zielonka

The war between Anthropic and the Pentagon points to something terrifying: a new “Oppenheimer Moment”

Anthropic has refused to bow to pressure from the Pentagon. Its co-founder and CEO, Dario Amodei, has just published a statement making it clear that the company is not willing to break its ethical principles: no mass espionage with AI, no development of lethal autonomous weapons with its models. And that brings to mind a terrible precedent: the atomic bomb.

From hero to villain. J. Robert Oppenheimer went from being the "father of the atomic bomb" and a national hero to becoming an outcast. His sin was not betrayal, but his moral clarity. After witnessing the horror of Hiroshima and Nagasaki, Oppenheimer desperately tried to stop the atomic escalation and the development of the hydrogen bomb.

Either you are with us or against us. The United States, which had praised him in the past, took advantage of his former political affiliations and stripped him of all his privileges and influence. This demonstrated how the US government simply decided that scientific knowledge was state property, and that any researcher who tried to propose ethical limits to his own projects would be treated as an enemy of the country. History is threatening to repeat itself these days.

From Oppenheimer to Anthropic. It is doing so with one protagonist that is still there, the US Government, and another that has changed: the one who now defends the ethics of a scientific-technological project is not Oppenheimer, but Dario Amodei, CEO of Anthropic.

Claude is increasingly vital to the US Government. Amodei's company is between a rock and a hard place these days. Anthropic managed to make its model Claude the darling of the US Government. The ability of this AI has proven so remarkable that it was apparently used to plan the arrest of the former president of Venezuela, Nicolás Maduro.

Red lines. But for the Pentagon to be able to use Claude, Anthropic imposed certain red lines: no use for mass surveillance of US citizens, and no use for the development of lethal autonomous weapons. The Pentagon has ended up not liking those red lines, so it wants to eliminate them and use Claude as it pleases as long as, it says, the Constitution and American laws are respected.

The Pentagon wants AI without restrictions. That has created an enormously tense situation these days. The Pentagon threatened to punish Anthropic if it did not give in to its demands, and those threats from the Department of Defense have not been subtle at all. In fact, it suggested that it could label Anthropic a company that is "a supply chain risk," a black mark typically reserved for companies from rival countries like China or Russia.

Contradiction. Dario Amodei himself explained in a post on the company's official blog that those two threats are mutually exclusive: "These last two threats are inherently contradictory: one labels us as a security risk; the other labels Claude as essential to national security."

Can AI be nationalized? It's a disturbing irony: the same government that considers Claude an essential tool for national security is willing to label its creators a public threat if they don't hand over the keys to the kingdom and their AI. What the Department of Defense wants is basically to "nationalize" the AI technology developed by Anthropic and appropriate it, as it already did with the technology that gave rise to the atomic bomb. We know how that ended. Anthropic refuses to give in.
The danger is enormous on both fronts: mass surveillance, rather than defending democracy, can dynamite it from within, and the NSA scandal is a good example. But even more worrying is the Pentagon's intention to use this AI to develop lethal autonomous weapons. Amodei insisted on this point, indicating that "foundational AI models are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts American warfighters and civilians at risk." Amodei even offers the Department of War/Defense help in the "transition to another provider" of AI models, but for now it is not clear which path the US government will take.

Oppenheimer Moment. If the Pentagon finally executes its threat and bans Anthropic, the message for the industry will be chilling: in the age of AI there are no conscientious objectors; if a company develops a technological and strategic advantage at a military level, that company is at the mercy of the State. It is a new and terrifying "Oppenheimer Moment" that conditions the future not only of Anthropic, but of the development of AI models itself.

In Xataka | "The world is in danger": Anthropic's security manager leaves the company to write poetry

Anthropic has red lines for its AI. The Pentagon just demanded that it delete them all

The Pentagon has given Anthropic until this Friday at 5:01 p.m. to accept the unrestricted use of its AI models for all types of applications, including espionage and military applications. The company has so far refused, but the Trump administration is threatening to invoke a 75-year-old rule to "appropriate" Anthropic's AI technology.

Red lines. The conflict has its origin in the red lines imposed by Anthropic's ethical standards. The company, led by Dario Amodei, refuses to let its models be used for mass surveillance of American citizens (it says nothing about citizens of other countries) or in the development and use of lethal autonomous weapons controlled entirely by AI.

The Pentagon wants to use AI (almost) without limits. These types of safeguards clash head-on with the Pentagon's position, which demands that its technology providers open up the use of their software and hardware solutions for any legal purpose defined by the military, without external vetoes. As long as the US Constitution and laws allow it, a private company should not be able to impose limits on the use of its technology, the US Government argues.

Tension after the Maduro incident. Things began to go wrong when it was learned that the Claude model was used in a US special forces operation in January to capture the former Venezuelan president, Nicolás Maduro. The incident put the army's dependence on Claude under the microscope: Anthropic is currently the only AI company that operates in the Pentagon's classified systems, which gives it a notable position of power that the US government now wants to break.

This smells bad. The Pentagon's strategy is disturbing from a legal point of view. There are three main possibilities for action:

Cancel the Anthropic contract and start working with another AI company (or companies) willing to accept its terms. Yesterday we learned that xAI has already signed an agreement so that the DoD can use its Grok model in classified systems. Google also seems to be an option they are working on.

Label Anthropic a risk to the supply chain. That is very dangerous, because it would mean that a huge number of companies in the US would not be able to work with Anthropic. It would be a kind of veto like the one the US imposed on Huawei, but applied to a national company. The impact for Anthropic and its investors (Amazon and Google among them) would be catastrophic.

Activate Title I of the Defense Production Act of 1950, a special law theoretically designed to control the economy during wars and emergencies. It was used, for example, during the COVID-19 pandemic to boost the production of medical supplies and accelerate the production of vaccines. It seems unlikely that they could do something like that.

How did this whole mess start? The Biden administration promoted measures and ethical limits to restrict the application of AI, but everything changed with the mandate of Donald Trump. In June 2025 Anthropic released Claude Gov, a specialized series of AI models specifically designed for use by US national agencies in security, defense and intelligence.

AI with military and intelligence applications. These models were prepared to operate in environments with classified information. Anthropic also offered them for a symbolic price of 1 dollar to ensure that the Government would prefer them over those of other competitors. Shortly thereafter, the DoD granted the company a contract worth $200 million, and the company has since been integrating with the Palantir systems used in US government agencies.
Two opposing positions. Anthropic therefore positions itself as a defender of certain limits on the use of its AI models. The Department of Defense (DoD) disagrees, arguing that military use of any technology should only have to adhere to the US Constitution and laws. The company maintains that it seeks to support the national security mission, but only within what its models can do reliably and responsibly.

The dilemma. If the Pentagon carries out its threat, a precedent will be set whereby the State can intervene in the intellectual property of a software company under the argument of national emergency. This would force all Big Tech companies to decide whether they are willing to cede full control of their technological developments to the military... or risk being taken over under an almost 80-year-old law.

Image | Ben White | Anthropic

In Xataka | IBM has lived for decades off the fact that no one could kill COBOL. Anthropic has other plans

IBM has lived for decades off the fact that no one could kill COBOL. Anthropic has other plans

IBM shares fell about 13.2% yesterday on the New York Stock Exchange for a simple reason: Anthropic announced that its AI model, Claude, can be used to modernize systems based on the legendary COBOL programming language. And that is something that seemed virtually impossible.

The immortal language. As Anthropic itself indicates, it is estimated that COBOL manages 95% of all transactions made at ATMs in the US. A 2022 study revealed that there are 800 billion lines of COBOL code that continue to operate in production systems on a daily basis.

That almost no one uses anymore. Against this reality stands another, equally powerful one: almost no one programs in COBOL anymore, because this language has been with us for 65 years and has ended up being replaced by modern programming languages. The question, of course, is who takes care of those millions of lines of code if there are almost no human programmers who can do it. Anthropic itself made it clear: "the number of people who understand COBOL decreases every year."

AI to the rescue. That's where Claude, Anthropic's family of generative AI models, comes in. According to the company, Claude is now capable of "modernizing" COBOL despite how difficult and expensive doing something like that used to be. IBM has been trying for years and in fact applied that same recipe, but its AI (Watson) does not seem to have made much progress.

Claude helps, but there must be a human expert supervising. Anthropic promises that its AI model is capable of reading the entire code base of a COBOL project, identifying entry points and execution paths through subroutines, mapping data flows and documenting dependencies. They highlight, however, that it is with the supervision of a human expert that this can help modernize and polish all types of COBOL-based systems.
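To make that workflow concrete, here is a hedged sketch (not Anthropic's actual tooling) of asking Claude to document a small COBOL program through the official Python SDK; the model id is an assumption and the COBOL fragment is a toy example:

```python
# Illustrative sketch only: ask Claude to document a COBOL program.
# This is NOT Anthropic's modernization tooling; the model id is assumed.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

cobol_source = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. INTCALC.
       DATA DIVISION.
       WORKING-STORAGE SECTION.
       01 WS-BALANCE   PIC 9(9)V99 VALUE 1000.00.
       01 WS-RATE      PIC 9V999   VALUE 0.035.
       PROCEDURE DIVISION.
           COMPUTE WS-BALANCE = WS-BALANCE * (1 + WS-RATE).
           DISPLAY "NEW BALANCE: " WS-BALANCE.
           STOP RUN.
"""

reply = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model id
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Document what this COBOL program does, its entry "
                   "points and data flows, so a Java developer could "
                   f"rewrite it:\n\n{cobol_source}",
    }],
)
print(reply.content[0].text)
```

As the article stresses, the output of this kind of call is a starting point for a human expert, not a replacement for one.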
Critical systems. Of course, the question is whether AI will actually deliver on that promise, especially when we're talking about absolutely critical systems used in financial transactions. According to Anthropic, "the modernization of legacy code has been stagnant for years because understanding it cost more than rewriting it. AI reverses that equation."

COBOL is no longer IBM's ace in the hole. It's hard to know how much of IBM's business depended on COBOL systems, but it's certainly a relevant part. In 2025 the company achieved revenue of $67.5 billion. About 45% comes from software. The rest is consulting and infrastructure, and this last division is where the IBM Z mainframe business, closely linked to COBOL systems, is included. It's reasonable to think that revenues dependent on mainframes and COBOL are around 20% of IBM's revenues (and probably more in profits).

AI and the SaaSpocalypse. What happened with IBM and COBOL is the latest case of software that seemed to have a long-term future but, with AI, may no longer have one. Investors now seem to think that AI will replace many of these systems and SaaS platforms. This is indeed what has been called the "SaaSpocalypse", in reference to the stock market falls of this type of company in recent months: Salesforce, SAP, Microsoft, Adobe, Intuit and Atlassian have suffered notable falls in the stock market, around 30-40% on average.

But. This investor panic contrasts with the current reality: AI models are proving able to do surprising things in the field of programming, but they are far from perfect. The code must be reviewed, and IBM itself already made it clear in a 1979 training manual: "A computer can never be held responsible. Therefore, it should never make an administrative decision."

IBM has already survived other crises. The blue giant has suffered a blow on the stock market, but it is one of those technology companies that have managed to recover from and resist all the attacks of an industry that is normally merciless. IBM itself also has modernization solutions for its clients, and some analysts are clear that IBM will in fact make more money than before if COBOL finally goes away.

In Xataka | Old programmers never die, and Silicon Valley is realizing that

Anthropic just accused DeepSeek and other Chinese companies of “distilling” Claude

For months we have talked about the race between the United States and China to dominate artificial intelligence as if it were only a question of who trains the most powerful model or launches the next version first. But the contest is starting to move to another, more delicate arena: the rules of the game. When one laboratory accuses another of extracting capabilities from its system to accelerate its own development, the discussion goes beyond the technical. That's exactly what Anthropic just did by denouncing "distillation" campaigns against its model Claude.

The complaint. In a text published this Monday, the company claims to have detected "industrial-scale campaigns" aimed at extracting Claude's capabilities. According to its version, the activities attributed to DeepSeek, Moonshot and MiniMax involved more than 16 million queries (question-and-answer interactions) channeled through approximately 24,000 fraudulent accounts, in violation of its terms of service and regional access restrictions.

The race and the suspicion. The announcement by the firm led by Dario Amodei comes in a context of growing tension around the progress of Chinese AI. Remember that DeepSeek shook the Silicon Valley landscape a year ago with the launch of R1, a competitive model presented as having been developed at a fraction of the cost of American alternatives. The impact on the markets was immediate, and it revived the political debate in Washington about the technological advantage over China.

Distilling is not always cheating. Anthropic itself recognizes that distillation is a common technique in the sector. It consists, in simple terms, of training a less capable model using the responses generated by a more powerful one, something that large laboratories use to create smaller, cheaper versions of their own systems. The problem, according to the company, appears when this practice is used to "acquire powerful capabilities from other laboratories in a fraction of the time and at a fraction of the cost" that developing them independently would entail. In that case, distillation ceases to be an internal optimization and becomes, always according to Anthropic, a way of taking advantage of the work of others.
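In mechanical terms, the legitimate, in-house variant of distillation can be as simple as collecting a teacher model's answers as supervised training pairs for a smaller student. A minimal illustrative sketch in Python follows; the model id and output file are assumptions, and doing this against another lab's API at scale is precisely what Anthropic says violates its terms:

```python
# Illustrative sketch of distillation data collection: store a teacher
# model's completions as (prompt, completion) pairs that a smaller
# student model can later be fine-tuned on. Model id and path assumed.
import json
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

prompts = [
    "Explain step by step why the sky is blue.",
    "Summarize the main causes of the 2008 financial crisis.",
]

with open("distillation_pairs.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        reply = client.messages.create(
            model="claude-sonnet-4-5",  # illustrative model id
            max_tokens=1024,
            messages=[{"role": "user", "content": prompt}],
        )
        # One JSONL line per supervised fine-tuning example
        f.write(json.dumps({
            "prompt": prompt,
            "completion": reply.content[0].text,
        }) + "\n")
```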
Recognizable pattern. The three laboratories allegedly used fraudulent accounts and proxy services to access Claude at large scale while trying to avoid detection systems. The company describes infrastructures it calls "hydra clusters": extensive networks of accounts that distribute traffic between its API and third-party cloud platforms, so that when one account was blocked, another took its place. Anthropic maintains that what differentiated these activities from normal use was not any isolated query, but the massive and coordinated repetition of requests aimed at extracting very specific capabilities from the model.

Three campaigns. Although Anthropic presents the campaigns as part of the same dynamic, it distinguishes relevant nuances. DeepSeek allegedly focused its more than 150,000 queries on extracting reasoning capabilities and on generating safe alternatives to politically sensitive questions. Moonshot, with more than 3.4 million queries, was allegedly oriented towards the development of agents capable of using tools and manipulating computing environments. MiniMax allegedly accounted for the largest volume, more than 13 million queries, and according to Anthropic's account it reacted within hours to the launch of a new system, redirecting its traffic to try to extract capabilities from Anthropic's most recent model.

A geopolitical issue. The company states that illicitly distilled models may lose the safeguards that seek to prevent state or non-state actors from using AI for purposes such as the development of biological weapons or disinformation campaigns. It also argues that distillation undermines export controls by allowing foreign laboratories to close the gap by other means, while recognizing that executing these large-scale extractions requires access to advanced chips, thus reinforcing the logic of restricting their availability, and warning that the risk would grow if these capabilities end up integrated into military, intelligence or surveillance systems.

Images | Xataka with Nano Banana Pro

In Xataka | Seedance is the most brutal thing we have seen in video generation. And it carries an uncomfortable message: it has surpassed Sora and Veo without NVIDIA chips

Anthropic corners Gemini 3 Pro and GPT-5.2 more than ever

Think for a moment about the artificial intelligence models you have used in recent days. It may have been through ChatGPT, Gemini or Claude, or perhaps through tools like Codex, Claude Code or Cursor. In practice, the choice is usually simple: we end up using whatever best fits what we need at any given moment, almost without stopping to think about the technology behind it. However, that balance shifts frequently. Each new model that appears promises improvements, new capabilities or different ways of working, and with it a fairly direct question returns: is it worth trying, can it really offer us something better, or is what we already use still enough? Claude Sonnet 4.6 has just come to the fore, and this is how it is positioned against the competition.

Claude Sonnet 4.6's starting point. Here we find what Anthropic describes as a cross-cutting improvement in capabilities, including advances in coding, computer use, long-context reasoning, agent planning, and tasks typical of intellectual and creative work. Added to this set is a context window of up to one million tokens in beta, designed to process entire code bases, extensive contracts or large collections of information without fragmentation.

Three tiers, the same map. To understand where Sonnet 4.6 fits in, it's worth looking at how Anthropic tends to organize its family of models into tiers with different objectives. Haiku prioritizes speed and efficiency, Opus is reserved for tasks that require the deepest reasoning, and Sonnet occupies the middle ground, designed as a balance between capability and operating cost. Within this framework, the company maintains that the new Sonnet comes close, in some real jobs, to the performance previously associated with Opus: an ambitious claim.

When AI starts using the computer. One of the improvements that Anthropic highlights most strongly in Sonnet 4.6 is its progress in what it calls computer use, that is, the ability of the model to interact with software in a way similar to a person, without depending on APIs designed specifically for automation. This progress is supported by references such as OSWorld-Verified, a testing environment with real applications where the Sonnet family has been improving steadily over several months. The company also acknowledges limits and risks that we have discussed before, such as attempts at manipulation through prompt injection.

Searching for the 'best' model. At this point, the relevant question stops being how much Sonnet 4.6 has improved in absolute terms and becomes how it compares with the other large models that today compete for the same space. The comparison is not simple, nor does it allow for a single winner, because each system excels in different areas and responds to different technical priorities. That is why it is advisable to read the benchmarks with a practical perspective, identifying in which specific tasks the real differences appear.

Where each model stands out. The direct comparison with GPT-5.2 draws a distribution of strengths rather than a clear victory. According to the table published by Anthropic, Sonnet 4.6 stands out especially in the autonomous use of the computer as measured by OSWorld-Verified, in addition to showing an advantage in office tasks (GDPval-AA Elo) and in some analysis or problem-solving scenarios (Finance Agent v1.1, ARC-AGI-2).
GPT-5.2, for its part, maintains better results in graduate-level reasoning (GPQA Diamond), visual comprehension (MMMU-Pro) and terminal programming (Terminal-Bench 2.0), with nuances such as results marked as "Pro" in some tests (BrowseComp, HLE) or self-reported scores in Terminal-Bench 2.0. The comparison with Gemini 3 Pro introduces a different nuance, because here the advantages are concentrated above all in the field of reasoning and general knowledge. The Google model obtains better results in graduate-level reasoning tests (GPQA Diamond) and in wide-ranging multilingual questionnaires (MMMLU), in addition to being ahead in visual reasoning without tools (MMMU-Pro). Sonnet 4.6, on the other hand, retains a certain advantage when external tools or scenarios closer to applied work come into play. The absence of some comparable data in the table itself forces us, in any case, to interpret this duel with caution.

Where Sonnet 4.6 can be used. The new model is available in all Claude plans, including the free tier, where it also becomes the default option within claude.ai and Claude Cowork. It can also be used through Claude Code, the API and the main cloud platforms, maintaining the same price as Sonnet 4.5.

After going through capabilities, limits and comparisons, the real decision returns to the user's daily life. Sonnet 4.6 aims to be especially useful in productive tasks, direct interaction with software and long workflows, while GPT-5.2 and Gemini 3 Pro maintain advantages in academic reasoning, visual comprehension or general knowledge depending on the test considered. No model dominates all fronts, and that fragmentation defines the current moment of artificial intelligence.

Images | Anthropic

In Xataka | In 2025, AI seemed to have hit a wall of progress. A wall that vanished in February 2026

In Xataka | The great revolution of GPT-5.3 Codex and Claude Opus 4.6 is not that they are smarter. It's that they can improve themselves

Anthropic wanted to secretly scan and then destroy millions of books to train its AI. It hasn’t been so secret

A language model needs input if it is to be trained to be more accurate and effective. The issue is how that information is obtained, and whether there is an ethical way to do it that is still profitable for the technology company in question. There is no doubt that the preferred option for companies has been to use all available physical and digital content without anyone's permission. Now there is also evidence: a judicial leak reveals that Anthropic invested tens of millions of dollars in acquiring and digitizing literary works without permission from the authors. According to the Washington Post, the project, internally called "Panama", was part of a frenetic race among big technology companies to accumulate massive amounts of data to train their artificial intelligence models.

How it all started. The Panama Project was launched by Anthropic in early 2024. According to internal documents revealed by the Washington Post, the goal was to "destructively scan every book in the world." Furthermore, these documents also explicitly state that the company did not want anyone to know it was working on this. In about a year, the company spent tens of millions of dollars buying millions of books, cutting their spines with hydraulic machines and scanning their pages to feed the AI models that power Claude, its star chatbot. According to the outlet, the books, once digitized, ended up being recycled.

Why it has come to light. The details of the project were revealed in a copyright infringement lawsuit filed by literary authors against Anthropic. Although the company agreed to pay $1.5 billion to close the case in August 2025, a district judge decided last week to make more than 4,000 pages of internal documents public, exposing the entire operation.

They are not the only ones. Court documents reveal that other technology companies such as Meta, Google and OpenAI had also participated in this race to obtain massive amounts of data to train their models. According to the revealed documents, an Anthropic co-founder theorized in January 2023 that training AI models with books could teach them "how to write well" instead of imitating "low-quality internet slang." Meanwhile, an internal Meta email from 2024 described access to a digital library of books as "essential" to be competitive with rivals in the race to dominate AI. However, the documents revealed by the outlet also show Meta employees expressing concern on several occasions about the legality of downloading millions of books without permission. An internal email from December 2023 indicates that the practice had been approved after being "escalated to MZ," apparently referring to CEO Mark Zuckerberg. According to court records the outlet has had access to, the companies did not consider it "practical" to obtain direct permission from publishers and authors. Instead, they found ways to mass-acquire books without the writers' knowledge, including downloading unauthorized copies from third-party sites. Chat logs from April 2024 show an employee asking why they were using servers rented from Amazon to download torrents instead of Facebook's own. The answer: to "avoid the risk of tracing" the activity back to the company.

Data torrent. The documents the Washington Post has had access to also show that Ben Mann, co-founder of Anthropic, personally downloaded a collection of books from LibGen, a gigantic library of copyrighted content, over 11 days in June 2021.
The outlet further revealed that, a year later, in July 2022, Mann celebrated the launch of the 'Pirate Library Mirror' website, which boasts a massive database of books and openly claims to violate copyright laws. "Just in time!!!" Mann wrote to other Anthropic employees, according to the outlet. Anthropic stated in legal documents that it never trained a revenue-generating model using LibGen data, nor did it use Pirate Library Mirror to train any complete model.

Anthropic's legal solution. As the outlet points out in its article, faced with the legal risk, Anthropic changed its strategy. The company hired Tom Turvey, a Silicon Valley veteran who had helped create the Google Books project two decades earlier. Under his direction, Anthropic considered purchasing books from libraries or secondhand bookstores, including New York's iconic Strand bookstore. The company ultimately ended up buying millions of books and stacking them in a giant warehouse, often in batches of tens of thousands, according to court filings. The Washington Post adds that the company worked with used-book sellers in the United Kingdom. A project proposal mentions that Anthropic sought to "convert between 500,000 and two million books in a six-month period."

What the law says. Most legal cases against AI companies are still ongoing, but the outlet mentions two court rulings that have considered that the use of books to train AI models without permission from the author or publisher may be legal under the "fair use" doctrine of copyright law. In June 2025, District Judge William Alsup determined that Anthropic had the right to use books to train AI models because it processes them in a "transformative" way. He compared the process to teachers "teaching schoolchildren to write well." That same month, Judge Vince Chhabria ruled in the Meta case that the authors had not shown that the company's AI models could harm the sales of their books. In the Anthropic case, the physical book-scanning project was considered legal, but the judge determined that the company may have infringed copyright by downloading millions of books without authorization before launching Project Panama.

The final agreement. Instead of facing a trial, Anthropic agreed to pay $1.5 billion to publishers and authors without admitting guilt. As the outlet points out, authors whose books were downloaded can claim their share of the settlement, estimated at about $3,000 per title.

Cover image | Emil Widlund and Anthropic

In Xataka | If AI is going to leave us without jobs, in the United Kingdom they are already seriously discussing the solution: a universal basic income

Programming is AI's new chessboard. OpenAI and Anthropic have made it clear with GPT-5.3-Codex and Claude Opus 4.6

When ChatGPT broke out in November 2022, OpenAI seemed unrivaled. And, to a large extent, that was the case: that chatbot, despite its errors and limitations, inaugurated a category of its own. However, in the technology sector advantages are rarely permanent, and in 2026 the position of the company led by Sam Altman is a far cry from what it was then. Google has managed to attract the general public with Nano Banana Pro, while Gemini steadily gains ground as an artificial intelligence chatbot. At the same time, ChatGPT's market share has fallen significantly in some markets. Anthropic, for its part, has established itself as a reference in software engineering and has become one of the preferred tools among programmers.

In this race to set the pace of AI, this Thursday we witnessed a curious movement: the almost simultaneous arrival of two models focused on programming, GPT-5.3-Codex and Claude Opus 4.6. The coincidence does not seem accidental and reflects the extent to which the major players in the sector compete to define the next step, in a scenario where the main beneficiaries are, once again, the users. With these new models on the table, the question becomes what they really contribute. There are plenty of promises, and comparable benchmarks that help to place them are also beginning to appear. It is time, then, to look in a little more detail at what OpenAI and Anthropic propose for those who use AI as a development tool.

GPT-5.3-Codex and Opus 4.6 enter the scene: what each promises developers. GPT-5.3-Codex is presented as a model focused on programming agents that seeks to expand the scope of what a developer can delegate to AI. OpenAI claims that it combines improvements in code performance, reasoning and professional knowledge over previous generations and is 25% faster. With this balance, the system is oriented toward prolonged tasks involving research, use of tools and complex execution, while also maintaining the possibility of intervening and guiding the process in real time without losing the thread of the work.

One of the most striking elements that OpenAI highlights in this generation is the role that Codex itself played in its development. The team used early versions of the model to debug training, manage deployment, and analyze test and evaluation results, an approach that accelerated research and engineering cycles. Beyond that internal process, GPT-5.3-Codex also shows progress in practical tasks such as the autonomous creation of web applications and games. The company has published two examples: a racing game with eight maps and a diving game for exploring reefs.

Anthropic's turn comes with Claude Opus 4.6, an update that the company presents as a direct improvement in planning, autonomy and reliability within large code bases. The model, they claim, can sustain agentic tasks for longer, reviewing and debugging its own work more accurately. The idea is that these capabilities can be used in tasks such as financial analysis, documentary research or creating presentations. Added to this is a context window of up to one million tokens in beta phase, a leap that seeks to reduce the loss of information in long processes and reinforce the usefulness of the system.

Beyond the core of the model, Anthropic accompanies Opus 4.6 with a series of changes aimed at prolonging its usefulness in real workflows. Among them are mechanisms such as the so-called "adaptive thinking", which allows the system to automatically adjust the depth of its reasoning depending on the context. Configurable effort levels and context-compression techniques designed to sustain long conversations and tasks without exhausting the available limits also appear on the scene. Added to this are teams of agents that can be coordinated in parallel within Claude Code, and deeper Excel and PowerPoint integration. While OpenAI's product, GPT-5.3-Codex, is not yet available in the API, Anthropic's is, and it maintains the base price of $5 per million input tokens and $25 per million output tokens, with nuances such as a premium cost when prompts exceed 200,000 tokens.
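Those prices make back-of-the-envelope cost estimates straightforward. Here is a small worked example in Python; the premium rate above 200,000-token prompts is left out, so this assumes standard pricing applies:

```python
# Back-of-the-envelope cost estimate at Opus 4.6's quoted API pricing
# ($5 per million input tokens, $25 per million output tokens). The
# premium rate for prompts above 200K tokens is NOT modeled here.
INPUT_PER_M = 5.0    # USD per million input tokens
OUTPUT_PER_M = 25.0  # USD per million output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated USD cost of a single API call."""
    return (input_tokens / 1e6) * INPUT_PER_M + (output_tokens / 1e6) * OUTPUT_PER_M

# Example: feeding a 150K-token code base and getting a 4K-token review
print(f"${estimate_cost(150_000, 4_000):.2f}")  # -> $0.85
```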
Can we measure who wins with numbers? When trying to put GPT-5.3-Codex and Claude Opus 4.6 face to face, the main obstacle is not the lack of figures, but their poor correspondence. Each company selects the evaluations that best reflect its progress and, although many belong to similar categories, they differ in methodology, versions or metrics, which prevents a direct reading. In models of this type, this fragmentation of results is part of the state of the technology itself, but it also requires a cautious interpretation that separates technical demonstrations from truly equivalent comparisons. Only with this filter is it possible to identify the few points where both systems can be measured under comparable conditions and draw useful conclusions for developers.

If we restrict the analysis to truly comparable metrics, the common ground between GPT-5.3-Codex and Claude Opus 4.6 is limited to two specific evaluations identified through our own research: Terminal-Bench 2.0 and OSWorld in its verified version. The results show a distribution of strengths rather than clear supremacy. GPT-5.3-Codex scores 77.3% on Terminal-Bench 2.0 compared to 65.4% for Opus 4.6, which points to greater efficiency in terminal-centric workflows. Conversely, Opus 4.6 reaches 72.7% on OSWorld, surpassing GPT-5.3-Codex's 64.7% in general system-interaction tasks, a contrast that reinforces the idea of specialization by environment of use.

We could say, then, that the capabilities described by each manufacturer point to tools that are no longer limited to generating code, but seek to participate in prolonged processes of analysis, execution and review within real professional environments. This transition introduces new selection criteria that go beyond point-in-time performance.

In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake

Anthropic has taken Apple’s strategy against Microsoft to the Super Bowl: making using the rival look ridiculous

Anthropic has opened the Super Bowl by attacking OpenAI with ads that show virtual therapists advertising dating apps and personal trainers selling boosts for short people. The message: "Ads are coming to AI. But not to Claude."

Sam Altman has responded on X, calling the ads "dishonest" and accusing Anthropic of "doublespeak", that is, deceptive language, or simply hypocrisy. It seems like a minor skirmish, two rivals fighting over an advertisement. But underneath it is a billion-dollar question: what kind of business will AI be once it's established?

The history of the Internet is summarized in two great models:

One free and supported by advertising: Google, Facebook, YouTube, Instagram, TikTok... regardless of whether they have premium versions.

The other direct payment by subscription: Netflix, DAZN, Disney+, Apple Music, PSN...

The first aims to maximize the audience; the second, the revenue per user. AI is right now deciding which of the two paths it takes.

In Xataka | AI is breaking one of the oldest economic paradigms in history: that cheap equals "bad"

OpenAI has already chosen and is starting to test ads on free ChatGPT accounts. Altman justifies it with the classic argument of democratization: "More Texans use free ChatGPT than the total number of people using Claude in the United States." In other words: they want to reach the billions of people who are not going to pay 20 dollars a month. And for that you need advertising.

Anthropic chooses the opposite. "Anthropic offers an expensive product to rich people," Altman reproaches. In a way, it is true: Claude is betting above all on contracts with companies and premium subscriptions of 20, 100 and 200 dollars per month. Its model depends on the AI being valuable enough for you to pay for it, and on you glancing from time to time at the higher plan with the temptation to go up one more step. Without advertising, without sponsored links and without responses influenced by advertisers.

The difference is not only business, it is product. An AI with advertising has different incentives than one without it. What happens when you ask the assistant what car to buy and there is a manufacturer paying to appear in its answers? What about medical, financial or legal advice? OpenAI has promised that "ads do not influence responses." That is what it says at minute zero. But that promise will be increasingly difficult to sustain as monetization pressure increases.

Anthropic has its own problem: if it only reaches those who can afford to pay, AI becomes a tool of the elites. A technology that promises to democratize knowledge ends up reproducing the class divisions that already exist. We saw this coming with the arrival of $200 plans to access the AI elite. A gap that creates another gap.

The parallel with the history of the Internet is inevitable. Free social networks caught (almost) all of us in the 2010s, but in return they built advertising surveillance machines optimized for engagement, not for anyone's well-being. Paid services are cleaner, but also more exclusive. So AI is now at that fork in the road:

OpenAI is committed to being the YouTube of AI: free for everyone, supported by ads and with premium versions for those who want to pay.
Anthropic wants to be the Netflix of AI: a better experience, free of ads, but only for those who pay. It is true that it maintains a free plan, but its limits are a continuous invitation to pay up or leave.

What is now up for grabs is what kind of relationship we will have with these machines that know more and more about us and that we ask for more and more: whether they will be services that serve us or platforms that monetize us.

In Xataka | The AI of 2026 brings an uncomfortable truth: the most useful will be the one that watches us the most

Featured image | Anthropic
