NVIDIA, Microsoft and Anthropic have signed a new multi-billion-dollar agreement

Microsoft, NVIDIA and Anthropic recently announced a series of strategic alliances that redraw the map of power in the generative AI race. Anthropic will deploy its Claude models on Azure, Microsoft's cloud, while committing to purchase $30 billion in computing capacity and to contract additional capacity of up to one gigawatt. For their part, NVIDIA and Microsoft will invest up to $10 billion and $5 billion respectively in the startup.

The triangular pact, in figures. Anthropic will have access for the first time to Microsoft Foundry, where its most advanced models (Claude Sonnet 4.5, Claude Opus 4.1 and Claude Haiku 4.5) will be available to Azure enterprise customers. With this, Claude becomes the only advanced model present in the three main cloud platforms in the world. Additionally, Microsoft promises to maintain the integration of Claude into its Copilot family, including GitHub Copilot, Microsoft 365 Copilot, and Copilot Studio. In parallel, NVIDIA and Anthropic establish their first collaboration of this caliber: they will work together on design and engineering to optimize the Claude models for future NVIDIA architectures, starting with the Grace Blackwell and Vera Rubin systems.

Microsoft looks for alternatives to OpenAI. This move comes just weeks after OpenAI completed its restructuring into a for-profit model and renewed its agreement with Microsoft. Although Microsoft maintains a 27% stake in OpenAI valued at about $135 billion, the new terms of the deal have relaxed some key elements of its exclusivity: OpenAI can now collaborate with third parties and release open source models, while Microsoft no longer has the right of first refusal to be its computing provider. According to The Verge, these changes in the relationship with OpenAI are precisely what allowed Microsoft to close this pact with Anthropic.
In fact, Microsoft had already been betting on Claude in some of its services: in Visual Studio Code, for example, it prioritizes Claude over GPT-5 in the model selector, and it recently added Claude Sonnet 4 and Claude Opus 4.1 to Microsoft 365 Copilot.

Circular financing: money that comes back. As is customary in these AI macro-agreements, a clear circular financing dynamic emerges. Microsoft and NVIDIA pump capital into Anthropic, which in turn commits to spending tens of billions on infrastructure provided by those same companies. In essence, some of the money invested returns as revenue from cloud computing services and specialized hardware. It is not a new phenomenon: Anthropic already has similar agreements with Amazon, which has invested $8 billion and continues to be its main infrastructure provider, and with Google, which in recent weeks announced a pact to provide up to one million TPUs to the startup. These types of cross-investments have become the norm in the generative AI ecosystem, creating almost symbiotic relationships between companies to meet their computing and infrastructure needs.

One gigawatt. Building a data center with that capacity could cost around $50 billion, according to industry estimates, with some $35 billion dedicated exclusively to AI chips. Although the figure pales next to OpenAI's Stargate project, which aspires to $500 billion in investment, Anthropic's approach seems more pragmatic and execution-focused. The company led by Dario Amodei has gained ground in the business market with less media noise but with solid results: its annualized revenue rate now reaches $7 billion, although like the rest of the AI startups it continues to spend much more than it earns.

Diversification. What is really relevant about this agreement is that it confirms a trend: large technology companies are no longer betting everything on a single card in AI.
Microsoft, which has invested billions in OpenAI since 2019 and made it the flagship of its AI strategy, is now expanding its portfolio with Anthropic. For its part, Anthropic demonstrates its ability to maintain multiple alliances without compromising its independence. It is the sensible option, and the one that minimizes risk.

Cover image | Microsoft

In Xataka | Tim Cook's end at Apple is approaching

OpenAI teamed up with NVIDIA and made circular financing fashionable. Anthropic has returned the ball with a surprise ally: Google

Did we really think OpenAI was going to be the only one looking for powerful allies? Not at all: Anthropic has just done the same and announced an eye-catching agreement with Google. The AI startup will have access to up to one million Google TPUs in a pact worth "tens of billions of dollars."

Less noise, more substance. The figures of the agreement are modest compared with those OpenAI has managed in its circular financing agreements with NVIDIA, AMD or Broadcom, but here Anthropic seems to take a very different position. Compared to colossal projects like Stargate, Anthropic's approach is focused on execution. Without making much noise, the company led by Dario Amodei has been gradually conquering the business sector.

More than 1 GW of computing capacity. CNBC indicates that this investment will allow the creation of a data center with a computing capacity greater than 1 GW, ready in 2026. It is estimated that a center of these characteristics would cost about $50 billion, of which about $35 billion would be dedicated to AI chips. It may not be comparable to Stargate and its idea of investing $500 billion in data centers, but the alliance between Anthropic and Google is significant.

More than circular financing. The partnership certainly features elements of circular financing, but it is more of a symbiotic relationship with that cross-investment component. The dynamic is simple and is now completed with a commercial return: the agreement requires Anthropic to buy or rent infrastructure services from Google Cloud.

Virtuous circle. With its original investment in Anthropic, Google helped that company grow, which gave Anthropic not only the ability to grow, but also the need for enormous computing power... provided by Google. In essence, some of the money Google invests in Anthropic returns to Google Cloud as revenue.
The vicious (or virtuous, as they say in the US) circle is complete.

Anthropic diversifies. Anthropic's AI models are trained and served using infrastructure from various manufacturers: Google TPUs, Amazon Trainium processors and NVIDIA GPUs, with each platform assigned to a specialized workload. In the case of Google's TPUs, Anthropic highlights "their strong price/performance ratio and their efficiency."

Promising successes, but... Anthropic's growth is evident, and its annualized revenue rate (ARR) is now estimated to reach $7 billion. Claude Code, its developer assistant, managed to generate $500 million after just two months on the market. But as always, that revenue cannot hide the fact that Anthropic, like other AI startups, continues to spend much more money than it earns.

Amazon is its other great ally. In fact, the company led by Andy Jassy has invested around $8 billion, while official data indicates that Google has invested $3 billion. AWS is still considered Anthropic's largest infrastructure provider, and its Project Rainier supercomputer, based on Trainium 2, delivers a lot of computing capacity for every dollar invested, according to Amazon. The company's influence is not only financial: it is structural.

Image | Wikimedia | Fortune Brainstorm Tech

In Xataka | You thought you had an amazing connection on Tinder, but you were actually chatting with ChatGPT

Anthropic is spending much more money than it brings in. The question is how long it can continue like this

How much does AI cost? AWS can answer that question: it has billed Anthropic a whopping $2.66 billion so far this year. The problem is twofold, because in that same period Anthropic is estimated to have earned $2.55 billion, so on that item alone it has spent more than it earns. But Anthropic has many more expenses, and the accounts, once again, do not add up in the AI segment.

Why is it important. The data revealed by Ed Zitron confirms the problem all AI startups face: they spend (much) more than they earn, and that trend does not seem to be reversing. In fact, although these companies are growing in revenue, their expenses are growing proportionally. And the question, of course, is whether this pace is sustainable.

The Anthropic case. According to Zitron's data, in 2024 Anthropic earned between $400 and $600 million but spent $1.35 billion on AWS, that is, 226% of its income. The trend appears to continue in 2025, with AWS spending at 104% of revenue. It seems that things have improved, but that expense does not include what it costs Anthropic to use Google Cloud infrastructure, another of its partners across its operations. That spending is also likely to be enormous, which complicates the situation.

The mystery of unexplained costs. The unaccounted cost gap is also enormous. In 2024 Anthropic's total spending was estimated at $6.2 billion. If we know it spent $1.35 billion on AWS, there is $4.85 billion left unexplained. That suggests that spending on Google Cloud and other operational costs is absolutely astronomical. In fact, computing costs may be much higher than we thought.

Another startup desperate for investment. Meanwhile, Anthropic continues to raise capital. Zitron's analysis reveals that between 2023 and 2025 it managed to raise investment rounds totaling $37.5 billion ($20 billion of it in 2025 alone).
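The spend-to-revenue ratios cited above follow directly from the article's own figures. A minimal sketch, assuming the rough revenue estimates given:

```python
# Minimal sketch: reproducing the spend-to-revenue ratios cited above from
# the article's own figures. Revenue numbers are rough estimates, so the
# results are approximate.

def spend_ratio(aws_spend_usd: float, revenue_usd: float) -> float:
    """AWS spend expressed as a percentage of revenue."""
    return 100 * aws_spend_usd / revenue_usd

# 2025 so far: $2.66B billed by AWS against an estimated $2.55B in revenue.
print(round(spend_ratio(2.66e9, 2.55e9)))  # → 104 (the ~104% cited)

# 2024: $1.35B on AWS against an estimated $400-600M in revenue.
print(round(spend_ratio(1.35e9, 0.6e9)))   # → 225 (near the 226% cited)
print(round(spend_ratio(1.35e9, 0.4e9)))   # → 338
```

The 2024 revenue range explains why the cited 226% is approximate: the ratio swings between roughly 225% and 338% depending on where revenue actually landed.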
A good part of that money came precisely from the companies that provide its infrastructure: Amazon and Google. Despite that funding, Anthropic appears as desperate as OpenAI to raise new rounds of investment. The company run by Dario Amodei recently resorted to money from Middle Eastern countries, for example.

Spending continues to skyrocket. The study's figures further reveal that Anthropic spends more as time passes. In January 2024 it spent $52.9 million on AWS, but by December 2024 that amount had risen to $176.1 million. In September 2025, spending on AWS is estimated at no less than $518.9 million: the escalation in costs is very notable.

And it tightens the screws on Cursor. One of Anthropic's most important clients is the vibe coding startup Cursor. This company has clearly been affected by the situation: Cursor's costs doubled from $6.19 million in May 2025 to $12.67 million in June. It was in those very months that Anthropic implemented the so-called "Service Levels," which forced business customers to spend a minimum amount and pay higher rates for prompt caching, a component especially relevant to startups that use generative AI models for programming. What did Cursor do? Raise the prices of its customer subscriptions (and apologize for it).

This can't go on forever. For Zitron, always very critical of this reality of AI companies, the conclusion is clear: Anthropic's costs are out of control. In fact, he argues that they grow practically linearly with revenue, which makes the business model unsustainable. The only solution would be to raise prices drastically (possibly by 100%) to become profitable. The problem is whether the market would accept suddenly paying twice as much for AI as it currently does.

Image | Anthropic | Taylor Vick

In Xataka | Anthropic says Claude Sonnet 4.5 can clone a service like Slack in 30 hours. The reality is more complicated

Anthropic says that Claude Sonnet 4.5 can clone a service like Slack in 30 hours. Reality is more complicated

Anthropic has launched Claude Sonnet 4.5, claiming that it put the model to work for 30 hours straight to build a Slack replica. During that time it generated 11,000 lines of code without supervision and only stopped when it completed the task. In May, its Opus 4 model managed to operate for seven hours. The company presents it as "the best model in the world for agents, programming and computer use."

Why is it important. Anthropic, OpenAI and Google are waging a battle to dominate autonomous agents and programming tools. Whoever convinces businesses will capture a lot of money in enterprise licenses. Scott White, product manager, says it operates "at the level of a chief of staff": it coordinates agendas, analyzes data, writes reports... Dianne Penn says she uses it to search for candidates on LinkedIn and generate spreadsheets.

Yes, but. Developers tell another, more nuanced story. Miguel Ángel Durán, known as @midudev, summarizes it: "Claude Sonnet 4.5 refactored my entire project in one prompt. 20 minutes thinking. 14 new files. 1,500 modified lines. Applied clean architecture. Nothing worked. But how beautiful it was." Other developers report the same: thousands of lines with an impeccable structure that do not run. Code that looks professional but collapses when you compile it.

Between the lines. Anthropic has not shown the Slack application working. It has only said that it built it. Nor has it shown that the code is operational. That is the difference between communicating something and demonstrating it, as Ed Zitron underlines. The company is indirectly acknowledging the problem: Claude Sonnet 4.5 arrives with extra infrastructure for building agents (virtual machines, memory management, context management, multi-agent support...). Translation: even with the most advanced model, developers need extra tools for agents to program reliably.

In detail. Penn explained to The Verge that the improvements surprised the internal team.
The model is three times more skilled at using computers than the October version. The team spent the last month working with feedback from GitHub and Cursor. Canva, one of the beta testers, says it helps with "complex long-context tasks."

The contrast. There is a huge gap between marketing and technical reality. Anthropic promises an AI that operates for 30 hours building complex software. Developers confirm that it generates very well structured but functionally broken code. This pattern repeats throughout the industry: the models keep improving at generating code that looks professional, and systematically fail at generating code that actually works without significant human intervention.

And now what. The question remains unanswered: when will we go from AI that generates beautiful but dysfunctional code to AI that generates working code on its own? Anthropic is betting that its combination of a powerful model and extra infrastructure will close that gap. For now we will have to keep waiting for concrete evidence to arrive; claims don't count without verifiable code.

In Xataka | OpenAI signs with Samsung and SK Hynix for a potential chip demand of 900,000 wafers per month. It is an absurd figure

Cover image | Anthropic

Anthropic wants to be unbeatable in programming, although its ambition goes further

Anthropic has just presented Claude Sonnet 4.5, an evolution the company defines as its most precise model to date. The focus is on agents, programming and computer use, with the idea of expanding what the previous versions of the Sonnet series already offered. Its arrival comes amid an increasingly tight race: OpenAI has launched GPT-5 with different capability tiers and Google continues to bet on Gemini, configuring a board where each advance generates new expectations.

The family's trajectory helps to understand the place this new version occupies. With Sonnet 3.7, Anthropic introduced a hybrid reasoning model that marked a remarkable leap in coding, content generation and data analysis. The subsequent arrival of Sonnet 4 consolidated that bet, reinforcing its position as a practical option for assistants. These improvements made Sonnet an outstanding alternative for programmers, and it is from that base that expectations are now raised about what 4.5 can contribute.

What Anthropic promises with its new model. Sonnet 4.5 introduces improvements designed for agents that need to maintain attention for long periods. According to Anthropic, it is able to sustain focus for more than 30 hours on complex tasks and admits outputs of up to 64,000 tokens, which expands its capacity to plan and generate code in extensive blocks. Developers get finer controls over how long the model "thinks" before responding, which opens margin to balance speed and detail based on the needs of each project.

Another area where Sonnet 4.5 seeks to differentiate itself is computer and browser use. Anthropic points out that the model has reached 61.4% on OSWorld, a benchmark that measures the ability to complete real tasks in a desktop environment. This is a considerable leap compared to the 42.2% obtained by Sonnet 4 just a few months ago.
The company shows practical examples with its Chrome extension, where Claude is capable of navigating websites, filling in spreadsheets or performing competitive analysis without constant supervision.

Programming is the terrain where Sonnet 4.5 wants to consolidate its leadership. Anthropic claims the model can cover the entire software development cycle: from initial planning to refactoring large projects, through maintenance and bug fixing. With the support of Claude Code, it seeks to become a stable assistant for technical teams.

The range of Sonnet 4.5 extends to a wide set of applications that, according to Anthropic, make it a model designed for corporate and research environments. The most repeated examples in its presentation include:

Cybersecurity: deployment of agents that fix failures without human intervention.
Finance: constant monitoring of regulatory changes and risk management.
Productivity: editing and creating office files in different formats.
Research: integration of internal and external data to prepare reports.
Content: writing with mathematical understanding and deep semantic analysis.

The company adds that Sonnet 4.5 has passed reviews with external experts to validate its safety and reliability. Sonnet 4.5 is now available to any user on Claude.ai, both on the web and in the iOS and Android apps. In parallel, developers can integrate it through the Claude Developer Platform, as well as services such as Amazon Bedrock and Google Cloud Vertex AI. The free plan works with a session limit that resets every five hours and a variable number of messages depending on demand. As for prices, it starts at $3 per million input tokens and $15 per million output tokens.

Images | Anthropic | Xataka with Gemini 2.5

In Xataka | "Humanoid robots are pure fantasy": iRobot's co-founder believes there is a robotics bubble

Anthropic is worth $183 billion even though it bills $5 billion a year. Either it is the deal of the century, or it is the madness of the century

Anthropic has just closed a $13 billion financing round that values it at $183 billion. The figure sounds like madness when we put it in context: the company bills $5 billion a year.

The figures. Anthropic is valued at 36 times revenue. Google, for comparison, trades at 6 times. Apple at 8. Microsoft at 14. Those are mature companies next to a startup, but none comes remotely close to this multiple. The Series F round has been led by ICONIQ Capital, with Fidelity and Lightspeed as co-investors. Heavyweights such as BlackRock, Qatar's sovereign wealth fund and the Ontario Teachers' Pension Plan have participated.

What has happened. In just eight months, Anthropic has multiplied its revenue by five: from $1 billion to $5 billion in August (annualized). It is one of the fastest growth stories in the history of technology. Claude Code, its tool for programmers, generates $500 million in annualized revenue and has multiplied its use in the three months since its full launch in May.

The context. The AI race has become a war of valuations disconnected from classical financial reality. OpenAI is negotiating a $500 billion valuation. Musk's xAI is seeking $75 billion. Investors are betting billions that these companies will dominate the future. Anthropic serves 300,000 business clients, and its large accounts (those paying more than $100,000 a year) have multiplied sevenfold in twelve months.

Yes, but. Developing elite AI models is very expensive. Anthropic depends on Amazon and Google for its computing infrastructure, and that costs it billions annually. Those costs are not going down; they are accelerating. Sam Altman, CEO of OpenAI, has said that his company will need to invest trillions of dollars. The generative AI business remains structurally loss-making for almost all participants. NVIDIA always wins.

Behind the scenes. Dario Amodei, CEO of Anthropic, has admitted in an internal memo that he is not "thrilled" to accept money from the sovereign funds of dictatorial governments.
But he says it is difficult to run a business while excluding "bad investors." The company has promised to use the $13 billion to expand capacity, deepen its safety research and push international expansion. It is also developing industry-specific products.

The end of a dream. A few months ago we speculated that Apple could buy Anthropic to accelerate its entry into AI. With a valuation of $183 billion, that option has been buried: it would be 60 times more expensive than Beats, Apple's biggest acquisition in its history. Not even Tim Cook, who was open to considering it, with $150 billion in cash available, could justify such a check to his shareholders.

The big question. Are we witnessing the birth of the new technological giants, or the biggest bubble since the dot-com era? With valuations at 36 times revenue, the margin for error is nonexistent. Investors are betting that Anthropic and its rivals will not only dominate AI, but that AI will transform the entire global economy. If they are right, $183 billion will seem cheap. If they are wrong, it will be a historic disaster.

Cover image | Anthropic, Xataka

In Xataka | People are holding funerals for retired AIs for a reason: they are not a "tool" but a source of support

Anthropic cuts OpenAI's access to Claude. It has done so just before the launch of GPT-5

The AI race is very intense lately. The latest episode stars Anthropic, which has cut off OpenAI's access to its Claude family of models. The company claims to have caught ChatGPT's engineers using Claude's programming tools, which has not gone down well. According to a company spokesperson speaking to Wired, this is "a violation of its terms of service," so Anthropic has restricted access to its API.

What exactly happened. OpenAI connected Claude to its internal tools through the API, instead of the conventional chat interface. This allowed the company to run comparative tests between Claude and its own models in areas such as programming, creative writing and safety-related responses. The results helped OpenAI evaluate the behavior of its models and make necessary adjustments.

An endless war. This decision goes beyond a simple contractual dispute: it marks a turning point in the relationship between two of the main powers of generative AI. Anthropic was born in 2021 precisely from an OpenAI split, when several key researchers, including siblings Dario and Daniela Amodei, left Altman's company over differences on the direction and safety of AI. Since then the tension has been palpable, although it had remained in the background.

Anthropic's justification. "Claude Code has become the preferred option for programmers everywhere, so we were not surprised to learn that OpenAI's technical staff were also using our programming tools before the launch of GPT-5," said Christopher Nulty, a spokesperson for Anthropic. The company considers this a direct violation of its commercial terms, which expressly prohibit using the service to "build a competing product or service" or to "reverse engineer" it.

OpenAI's response. Sam Altman's company has defended its practices as "industry standard" for evaluating other AI systems and improving safety.
"Although we respect Anthropic's decision to cut our access to the API, it is disappointing considering that our API remains available to them," said Hannah Wong, chief communications officer at OpenAI.

Between the lines. What we are seeing now is the materialization of a cold war that has been brewing for years. Anthropic has positioned Claude as the "safer and more ethical" alternative to ChatGPT, while OpenAI has maintained its lead in mass adoption and general capabilities. This rivalry is not only commercial: it is also philosophical, with very different approaches to how to develop and market AI. Moreover, blocking API access is not an isolated case in the tech sector: as Wired notes, Facebook also blocked Vine in its day, and Salesforce recently limited competitors' access. What is clear is that this reflects how competition in AI is becoming more aggressive and territorial.

Important nuances. Despite the blockade, Anthropic has clarified that it will maintain OpenAI's access "for benchmarking purposes and safety evaluations," a practice considered standard in the industry. However, the company has not specified how the current restriction will affect those activities.

And now what. This escalation arrives at the worst possible time for OpenAI, just as GPT-5, which promises significant programming improvements, is about to be officially unveiled. Everything indicates that Anthropic is willing to use all the tools at its disposal to halt the advance of its competitors. The worrying thing is that this could be only the beginning of a more open war among Big Tech over the use of AI.

In Xataka | Investment in AI already represents 2% of US GDP. The problem is that it doesn't even work well

Anthropic has seen that some of its users never stop using its $200-a-month AI plan. It has had to rein them in

Anthropic has announced new weekly limits for its Claude subscriptions, already in force, which especially affect Claude Code, its AI programming tool. The measure arrives after some users were found running the tool continuously 24 hours a day, consuming resources equivalent to tens of thousands of dollars on the most expensive $200-a-month subscriptions.

The servers are saturated and the numbers don't add up. The most intensive developers are hammering Anthropic's servers, saturating them with usage far beyond what the company had originally planned. Some programmers have configured Claude Code to run permanently in the background, automating tasks and generating code non-stop. This has caused Claude Code to suffer partial or total outages at least seven times in the last month, according to Anthropic's status page.

New limits. From now on, users will have limits that reset every seven days, in addition to the current five-hour limits. Users on the Pro plan ($20 per month) will be able to use between 40 and 80 hours a week of Sonnet 4. Those on the $100 Max plan will have between 140 and 280 hours of Sonnet 4 and up to 35 hours of Opus 4. The $200 Max plan will allow up to 480 hours of Sonnet 4 and 40 hours of Opus 4. Anthropic says that less than 5% of its users will be affected by these changes.

It is not an isolated case. Other companies in the sector are living through similar situations. Cursor, the popular AI programming tool, changed its pricing strategy in June to curb the most intensive users of its $20 plan, although it then had to apologize for not communicating the changes well. Replit, another rival in the AI-assisted programming space, implemented similar measures the same month. It is clear that companies dedicated to AI are experimenting with their plans and studying how their users actually use them.
Such early price changes are a symptom that these companies are still finding their footing while looking for a way to make the business profitable.

Hidden limits. One of Claude's problems is that it does not offer a counter that lets users know how much they are consuming in real time. Developers have to fly blind until they run straight into the limit, something that generates frustration especially among those paying for the most expensive subscriptions. In addition, the company has detected that some users are sharing and reselling accounts, which makes the use of those accounts even more intensive.

An experimental business model. Anthropic has pledged to offer "other options" for intensive use cases in the future, but for the moment the priority is to keep the service stable for most users. Max plan subscribers will be able to buy additional usage at API prices when they exceed their limits. Keeping a generative AI service like Claude running is a challenge, especially if you want to guarantee maximum performance for all users. That is why we see so many changes to these plans, and it will not be unusual to see the same in the rest of the alternatives.

Cover image | Anthropic

In Xataka | Some believe the best AIs get dumber over time. It is not madness

If the question is "how to write a good prompt for AI," Anthropic has just given us its guide

Anthropic, the creator of Claude, has published its definitive prompt engineering guide: a free bible that distills years of research into practical techniques for getting the most out of Claude.

Why is it important. Most users barely scratch the surface of what generative AI can do, falling short of what could benefit them in their day-to-day work, even outside engineering use cases.

The big picture. The guide covers everything from the foundations to the most sophisticated techniques: from how to be clear and direct in a request to advanced techniques such as multishot prompting or chains of thought. It culminates with strategies such as complex prompt chains, all illustrated with real Claude examples.

In detail. The central techniques in this guide work as a ladder:

Clarity and direction. Say exactly what you want; vague prompts produce vague results.
Multiple examples. Show Claude how to think with several use cases.
Chains of thought. Ask Claude to think step by step before answering.
XML tags. Structure prompts and answers with greater precision.
System roles. The classic "You are a lawyer," "You are a data analyst," etc., to change the chatbot's perspective.
Prefilled answers. Guide the tone and format by starting the answer yourself.

The context. Anthropic explains that many problems with generative AI are solved with better prompts, not necessarily with more powerful models. This is an attempt to democratize techniques that only the more advanced developers knew, opening up the guts of what they know works.

For example. The guide includes concrete examples. Instead of "Create a dashboard," write "Create a complete analytics dashboard with interactive charts and filters." For frontend code, add specific modifiers: "Include smooth transitions, micro-interactions and visual effects that demonstrate advanced web development capabilities." And for complex tasks, the guide recommends using XML tags to structure Claude's thinking.
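Several of those techniques can be combined in a single structured request. A minimal sketch in Python (the helper name and the `<document>` tag choice are illustrative, not taken from Anthropic's guide):

```python
# Minimal sketch of three techniques from the prompt engineering ladder:
# a system role, XML tags to delimit data, and a prefilled answer that
# steers the output format. Helper and tag names are illustrative.

def build_prompt(document: str) -> dict:
    """Assemble a structured request in the shape used by chat-style APIs."""
    return {
        "system": "You are a data analyst.",  # system role technique
        "messages": [
            {
                "role": "user",
                "content": (
                    "Summarize the key figures in the document below.\n"
                    # XML tags separate instructions from data unambiguously.
                    f"<document>\n{document}\n</document>"
                ),
            },
            # Prefill technique: starting the assistant turn guides tone
            # and format, so the reply continues the bulleted list.
            {"role": "assistant", "content": "Key figures:\n-"},
        ],
    }

prompt = build_prompt("Revenue grew from $1B to $5B in eight months.")
# The model's reply would continue from the prefilled "Key figures:\n-",
# keeping the bulleted format requested.
```

The value of the XML tags here is that the model cannot confuse the instructions with the data being analyzed, which is exactly the precision gain the guide describes.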
The difference is in specificity: saying exactly what you want produces much better results than vague instructions. The AI gives back what you give it.

Yes, but. There is an important nuance: this guide assumes you already have clear success criteria and ways to evaluate results. Without that foundation, even the best prompt is a shot in the dark.

And now what. For developers, this is gold. For the more casual user, it is worth mastering at least the first techniques to minimize hallucinations and improve results. Prompt engineering, which not long ago seemed a kind of black magic to be invoked mystically, is increasingly a skill that can be learned, measured and perfected.

Cover image | Xataka with Mockuuups Studio

In Xataka | ChatGPT has been a tool. If it starts remembering all our conversations, it will be something else: a relationship

Anthropic trained its AI on millions of copyrighted books. A judge has found that acceptable (with a big asterisk)

Anthropic has just achieved a very important legal victory in the legal battle that the AI world has been waging over copyright for years. The ruling, favorable to Anthropic, could set a major precedent for the rest of the cases in which AI companies have been sued for training their models on copyrighted works. But be careful, because it was not a total victory. Anthropic wins. In the lawsuit brought by three authors against Anthropic, the company was accused of downloading millions of copyrighted books, in addition to buying some of them to scan and digitize. The goal: to train its AI models. Judge William Alsup made clear in his ruling that "the use for training was a fair use." Companies developing AI models have always sheltered behind the concept of fair use to justify training their models on all kinds of works, including those protected by copyright. Fair use. This legal doctrine holds that limited use of protected material is allowed without needing permission from the rights holder. In copyright law, one of the ways judges determine whether such activity is fair use is to examine whether the use was "transformative", that is, whether something new was created from those works. For Alsup, "the technology at issue is among the most transformative many of us will see in our lifetimes." A victory with a big asterisk. Although the judge found that the training process was fair use, he also ruled that the authors could take Anthropic to trial for pirating their works.
The company argued that this was justified because it was "at least reasonably necessary to train LLMs." For Alsup the issue is precisely that, although Anthropic ended up buying some of the books, it built a huge library it did not pay for: "Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). The authors argue Anthropic should have paid for these pirated library copies. This order agrees." The Thomson Reuters precedent. A few months ago Thomson Reuters won a 2020 lawsuit against a startup called Ross Intelligence. According to Thomson Reuters, the company had reproduced material from its legal research division, Westlaw. The judge rejected the defense's arguments and ruled that fair use could not be applied in that case. The ruling in the Anthropic case goes in exactly the opposite direction and blesses that type of use... as long as companies buy the works they use to train their models. The AI company, incidentally, had already achieved a small legal victory in a previous case against Universal Music. Anthropic downloaded books wholesale. The trial revealed how Anthropic co-founder Ben Mann downloaded, in the winter of 2021, datasets such as Books3 and LibGen (Library Genesis), which are nothing more than gigantic compilations of books, many of them protected by copyright. Meta is in the same boat. All companies developing AI models have trained on all kinds of data, including copyrighted works, and they all face a similar situation. Meta, for example, downloaded 81.7 TB of copyrighted books via BitTorrent to train its AI models.
That means Mark Zuckerberg's company could end up suffering a fate similar to Anthropic's, which now faces a new judicial process that is very dangerous for its finances. A potential fine of billions of dollars. As Wired notes, the minimum fine for this type of copyright infringement is $750 per book. Alsup indicated that Anthropic's illegally downloaded library consists of at least seven million books, which means the company faces a potentially enormous fine. For now there is no date for that new trial. The endless battle between AI and copyright. This is the latest episode of a saga that will undoubtedly have many more chapters. Companies like Google, OpenAI and Perplexity have been equally voracious in training their models and have hoovered up public (and not so public) data on the Internet. Copyright infringement lawsuits are piling up, and cases like Anthropic's could set a troubling precedent for all of those that did not buy the books they used to train their models. Image | Emil Widlund. In Xataka | 5,000 "tokens" of my blog are being used to train an AI. I have not given my permission
