Anthropic says that Claude Sonnet 4.5 can clone a service like Slack in 30 hours. Reality is more complicated

Anthropic has launched Claude Sonnet 4.5, claiming to have put it to work for 30 hours straight building a Slack-style replica. In that time it generated 11,000 lines of code without supervision and stopped only when the task was complete. In May, its Opus 4 model managed to operate for seven hours. The company presents it as "the best model in the world for agents, coding and computer use."

Why it matters. Anthropic, OpenAI and Google are waging a battle to dominate autonomous agents and programming tools. Whoever wins developers over will capture a fortune in enterprise licenses. Scott White, product manager, says it operates "at the level of a chief of staff": it coordinates calendars, analyzes data, writes reports. Dianne Penn says she uses it to search for candidates on LinkedIn and generate spreadsheets.

Yes, but. Developers tell a more nuanced story. Miguel Ángel Durán, known as @midudev, sums it up: "Claude Sonnet 4.5 refactored my entire project in one prompt. 20 minutes thinking. 14 new files. 1,500 modified lines. Clean architecture applied. Nothing worked. But how beautiful it was." Other developers report the same: thousands of lines with an impeccable structure that do not run. Code that looks professional but collapses the moment you compile it.

Between the lines. Anthropic has not shown the Slack application working; it has only said that the model built it. Nor has it shown that the code is operational. That is the difference between announcing something and demonstrating it, as Ed Zitron has pointed out. The company is indirectly acknowledging the problem: Claude Sonnet 4.5 ships with extra infrastructure for building agents (virtual machines, memory management, context management, multi-agent support). Translation: even with the most advanced model, developers need extra tooling for agents to program reliably.

In detail. Penn told The Verge that the improvements surprised the internal team. The model is three times more skilled at using computers than the October version was. The team spent the last month working through feedback from GitHub and Cursor. Canva, one of the beta testers, says it helps with "complex long-context tasks."

The contrast. There is a huge gap between the marketing and the technical reality. Anthropic promises an AI that operates for 30 hours building complex software. Developers confirm that it generates very well structured but functionally broken code. The pattern repeats across the industry: models keep getting better at generating code that looks professional, and they keep failing to generate code that actually works without significant human intervention.

And now what. The question remains unanswered: when do we go from AI that generates beautiful but dysfunctional code to AI that generates functional code on its own? Anthropic is betting that its combination of a powerful model and extra infrastructure will close that gap. For now, the sensible stance is to keep waiting for concrete evidence and take nothing as given without verifiable code.

In Xataka | OpenAI signs with Samsung and SK Hynix for a potential chip demand of 900,000 wafers per month. It is an absurd figure

Featured image | Anthropic

Anthropic wants to be unbeatable in programming, but its ambition goes further

Anthropic has just presented Claude Sonnet 4.5, an evolution the company defines as its most capable model to date. The focus is on agents, programming and computer use, with the idea of expanding what previous versions of the Sonnet series already offered. Its arrival lands in an increasingly tight race: OpenAI has launched GPT-5 with different capability tiers and Google keeps betting on Gemini, setting up a board where every advance raises new expectations.

The family's trajectory helps explain where this new version sits. With Sonnet 3.7, Anthropic introduced a hybrid reasoning model that marked a notable leap in coding, content generation and data analysis. The subsequent arrival of Sonnet 4 consolidated that bet, reinforcing its position as a practical option for assistants. Those improvements made Sonnet a standout alternative for programmers, and it is from that base that the expectations for 4.5 now start.

What Anthropic promises with its new model. Sonnet 4.5 introduces improvements designed for agents that need to stay on task for long periods. According to Anthropic, it can sustain focus for more than 30 hours on complex tasks and supports outputs of up to 64,000 tokens, which expands its capacity to plan and generate code in long blocks. Developers also get finer controls over how long the model "thinks" before responding, which opens room to balance speed and depth depending on each project's needs.

Another area where Sonnet 4.5 seeks to stand out is computer and browser use. Anthropic says the model has reached 61.4% on OSWorld, a benchmark that measures the ability to complete real tasks in a desktop environment. That is a considerable leap from the 42.2% Sonnet 4 obtained just a few months ago.
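The thinking-time controls and the 64,000-token output ceiling mentioned above correspond to request parameters in Anthropic's Messages API. Here is a minimal sketch of what such a request payload could look like; the model ID, budget figures and prompt are illustrative assumptions, not values from the article:

```python
# Sketch: a Messages API request payload with an extended-thinking budget.
# The parameter names follow Anthropic's documented API; the model ID and
# the default budget values below are illustrative assumptions.

def build_request(prompt: str, thinking_budget: int = 8_000,
                  max_output: int = 64_000) -> dict:
    """Return a payload that caps output at `max_output` tokens and lets
    the model 'think' for up to `thinking_budget` tokens before answering."""
    if thinking_budget >= max_output:
        raise ValueError("thinking budget must be below the output cap")
    return {
        "model": "claude-sonnet-4-5",      # illustrative model ID
        "max_tokens": max_output,          # up to 64,000 output tokens
        "thinking": {                      # trade speed for deliberation
            "type": "enabled",
            "budget_tokens": thinking_budget,
        },
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Refactor the billing module into clean architecture.")
```

Raising `thinking_budget` is the lever the article alludes to: more deliberation per request at the cost of latency and tokens.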
The company shows practical examples with its Chrome extension, where Claude can navigate websites, fill in spreadsheets or run competitive analyses without constant supervision.

Programming is the ground where Sonnet 4.5 wants to consolidate its leadership. Anthropic claims the model can cover the entire software development cycle: from initial planning to the refactoring of large projects, through maintenance and bug fixing. With the support of Claude Code, it aims to become a stable assistant for technical teams.

Sonnet 4.5's reach extends to a wide range of applications that, according to Anthropic, make it a model designed for corporate and research environments. The examples most repeated in its presentation include:

Cybersecurity: deploying agents that patch flaws without human intervention.
Finance: constant monitoring of regulatory changes and risk management.
Productivity: editing and creating office files in different formats.
Research: integrating internal and external data to prepare reports.
Content: writing with mathematical understanding and deep semantic analysis.

The company adds that Sonnet 4.5 has passed reviews with external experts to validate its safety and reliability.

Sonnet 4.5 is now available to any user on Claude.ai, both on the web and in the iOS and Android apps. In parallel, developers can integrate it through the Claude Developer Platform, as well as services such as Amazon Bedrock and Google Cloud Vertex AI. The free plan works with a session limit that resets every five hours and a variable number of messages depending on demand. As for pricing, it starts at $3 per million input tokens and $15 per million output tokens.

Images | Anthropic | Xataka with Gemini 2.5

In Xataka | "Humanoid robots are pure fantasy": iRobot's co-founder believes there is a robotics bubble
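The quoted rates ($3 per million input tokens, $15 per million output tokens) translate into per-request costs with simple arithmetic. A quick sketch, using hypothetical token counts:

```python
# Sketch: per-request cost at the published Sonnet 4.5 rates.
# The token counts in the example are hypothetical.

PRICE_IN_PER_M = 3.00    # USD per million input tokens
PRICE_OUT_PER_M = 15.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one API call at the published rates."""
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# Example: a large-context call that returns a full 64,000-token response.
# 200,000 input tokens cost $0.60 and 64,000 output tokens cost $0.96.
cost = request_cost(input_tokens=200_000, output_tokens=64_000)  # 1.56
```

The asymmetry is the point: output tokens cost five times more than input tokens, so long generations, not long prompts, dominate the bill.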

Anthropic is worth $183 billion while bringing in $5 billion a year. Either it is the business of the century, or it is the madness of the century

Anthropic has just closed a $13 billion funding round that values the company at $183 billion. The figure sounds like madness in context: the company brings in $5 billion a year.

The figures. Anthropic is valued at 36 times revenue. Google, for comparison, trades at 6 times. Apple at 8. Microsoft at 14. Those are mature companies up against a startup, but none comes remotely close to this multiple. The Series F round has been led by ICONIQ Capital, with Fidelity and Lightspeed as co-investors. Heavyweights such as BlackRock, Qatar's sovereign wealth fund and the Ontario Teachers' Pension Plan have also participated.

What has happened. In just eight months, Anthropic has multiplied its revenue by five: from $1 billion to $5 billion annualized in August. It is one of the fastest growth curves in the history of technology. Claude Code, its tool for programmers, generates $500 million in annualized revenue and has tripled its usage in the three months since its full launch in May.

The context. The AI race has become a war of valuations disconnected from classical financial reality. OpenAI is negotiating a $500 billion valuation. Musk's xAI is seeking $75 billion. Investors are betting billions that these companies will dominate the future. Anthropic serves 300,000 business customers, and its large accounts (those paying more than $100,000 a year) have multiplied sevenfold in twelve months.

Yes, but. Developing elite AI models is very expensive. Anthropic depends on Amazon and Google for its computing infrastructure, which costs it billions a year. Those costs are not coming down; they are accelerating. Sam Altman, CEO of OpenAI, has said his company will need to invest trillions of dollars. The generative AI business remains structurally loss-making for almost every participant. Nvidia always wins.

Behind the scenes. Dario Amodei, CEO of Anthropic, admitted in an internal memo that he is not "excited" about accepting money from the sovereign funds of dictatorial governments, but says it is hard to run a business while excluding "bad investors." The company has promised to use the $13 billion to expand capacity, deepen safety research and push international expansion. It is also developing industry-specific products.

The end of a dream. A few months ago we speculated that Apple could buy Anthropic to accelerate its entry into AI. With a $183 billion valuation, that option is buried: it would cost 60 times more than Beats, Apple's largest acquisition in its history. Not even Tim Cook (who was open to considering it), with $150 billion in available cash, could justify such a check to his shareholders.

The big question. Are we witnessing the birth of the new tech giants, or the biggest bubble since the dot-com era? With valuations at 36 times revenue, the margin for error is nonexistent. Investors are betting that Anthropic and its rivals will not only dominate AI, but that AI will transform the entire global economy. If they are right, $183 billion will look cheap. If they are wrong, it will be a historic disaster.

Featured image | Anthropic, Xataka

In Xataka | People are holding funerals for retired AIs for a reason: they are not a "tool" but a support

Anthropic cuts off OpenAI's access to Claude, right before the launch of GPT-5

The AI race is running hot lately. The latest episode stars Anthropic, which has cut off OpenAI's access to its Claude family of models. The company claims to have caught ChatGPT's engineers using Claude's programming tools, which did not go down well. According to a company spokesperson speaking to Wired, this is "a violation of its terms of service," so it has restricted access to the API.

What exactly happened. OpenAI connected Claude to its internal tools through the API, rather than the regular chat interface. This allowed the company to run comparative tests between Claude and its own models in areas such as programming, creative writing and safety-related responses. The results helped OpenAI evaluate the behavior of its models and make the necessary adjustments.

An endless war. This decision goes beyond a simple contractual dispute: it marks a turning point in the relationship between two of the main powers of generative AI. Anthropic was born in 2021 precisely as an OpenAI splinter, when several key researchers, including siblings Dario and Daniela Amodei, left Altman's company over differences about the direction and safety of AI. Since then, the tension has been palpable, although it had stayed in the background.

Anthropic's justification. "Claude Code has become the go-to choice for programmers everywhere, so it was no surprise to learn that OpenAI's own technical staff were also using our coding tools ahead of the launch of GPT-5," said Christopher Nulty, an Anthropic spokesperson. The company considers this a direct violation of its commercial terms, which expressly prohibit using the service to "build a competing product or service" or to "reverse engineer" it.

OpenAI's response. Sam Altman's company has defended its practices as "industry standard" for evaluating other AI systems and improving safety. "While we respect Anthropic's decision to cut off our API access, it's disappointing considering our API remains available to them," said Hannah Wong, OpenAI's chief communications officer.

Between the lines. What we are seeing now is the materialization of a cold war that has been brewing for years. Anthropic has positioned Claude as the "safer and more ethical" alternative to ChatGPT, while OpenAI has kept its lead in mass adoption and general capability. The rivalry is not only commercial but philosophical, with very different approaches to how AI should be developed and marketed. Besides, blocking API access is nothing new in the tech sector: as Wired notes, Facebook blocked Vine in its day and Salesforce recently limited competitors' access. What is clear is that competition in AI is becoming more aggressive and territorial.

Important nuances. Despite the block, Anthropic has clarified that it will keep OpenAI's access open "for benchmarking and safety evaluations," a practice considered standard in the industry. However, the company has not specified how the current restriction will affect those activities.

And now what. This escalation comes at the worst possible time for OpenAI, especially considering that GPT-5, which promises significant programming improvements, is about to be officially unveiled. Everything indicates that Anthropic is willing to use every tool at its disposal to slow its competitors' advance. The worrying part is that this could be just the beginning of a more open war among Big Tech over the use of AI.

In Xataka | Investment in AI already represents 2% of US GDP. The problem is that it doesn't even work well

Anthropic has seen that users of its $200-a-month AI plan never stop using it. It has had to rein them in

Anthropic has announced new weekly limits for its Claude plans, which have already come into force and especially affect Claude Code, its AI programming tool. The measure comes after some users were running the tool continuously, 24 hours a day, consuming resources worth tens of thousands of dollars on the most expensive $200-a-month subscriptions.

The servers are saturated and the numbers don't add up. The most intensive developers are hammering Anthropic's servers, saturating them with usage far beyond what the company had originally planned. Some programmers have configured Claude Code to run permanently in the background, automating tasks and generating code nonstop. This has caused Claude Code to suffer partial or total outages at least seven times in the last month, according to Anthropic's status page.

New limits. From now on, users will have limits that reset every seven days, on top of the existing five-hour limits. Pro plan users ($20 a month) will get between 40 and 80 hours a week with Sonnet 4. Those on the $100 Max plan will get between 140 and 280 hours with Sonnet 4 and up to 35 hours with Opus 4. The $200 Max plan will allow up to 480 hours with Sonnet 4 and 40 hours with Opus 4. Anthropic says fewer than 5% of its users will be affected by these changes.

It is not an isolated case. Other companies in the sector are going through similar situations. Cursor, the popular AI programming tool, changed its pricing strategy in June to rein in the heaviest users of its $20 plan, although it then had to apologize for communicating the changes poorly. Replit, another rival in AI-assisted programming, implemented similar measures the same month. It is clear that AI companies are experimenting with their plans and studying how their users actually use them. Price changes this soon are a symptom that they are still finding their footing while searching for a way to make the business profitable.

Hidden limits. One of Claude's problems is that it offers no meter letting users see how much they are consuming in real time. Developers have to fly "blind" until they run straight into the limit, which generates frustration, especially among those paying for the most expensive subscriptions. The company has also detected that some users are sharing and reselling accounts, which makes the use of those accounts even more intensive.

An experimental business model. Anthropic has pledged to offer "other options" for intensive-use cases in the future, but for now the priority is keeping the service stable for most users. Max plan subscribers will be able to buy additional usage at API prices when they exceed their limits. Keeping a generative AI service like Claude running is a challenge, especially if you want to guarantee full performance for every user. That is why we are seeing so many changes in these plans, and it will not be unusual to see the same across the alternatives.

Cover image | Anthropic

In Xataka | Some believe the best AIs get dumber over time. It is not madness

If the question is "how do I write a good prompt for AI", Anthropic has just handed us its guide

Anthropic, the creator of Claude, has published its definitive guide to prompt engineering: a free bible that distills years of research into practical techniques for getting the most out of Claude.

Why it matters. Most users barely scratch the surface of what generative AI can do, getting less benefit in their day-to-day than they could, even outside engineering uses.

The big picture. The guide runs from the fundamentals to the most sophisticated techniques: from being clear and direct in a request to advanced methods like multishot prompting and chains of thought. It culminates with strategies such as chaining complex prompts, all illustrated with real Claude examples.

In detail. The nine core techniques in the guide work like a ladder; among them:

Clarity and direction. Say exactly what you want; vague prompts produce vague results.
Multiple examples. Show Claude how to think with several use cases.
Chains of thought. Ask Claude to think step by step before answering.
XML tags. Structure requests and answers with greater precision.
System roles. The classic "you are a lawyer", "you are a data analyst", and so on, to shift the chatbot's perspective.
Prefilled answers. Guide the tone and format by starting the answer yourself.

The context. Anthropic explains that many of the problems with generative AI are solved with better prompts, not necessarily with more powerful models. This is an attempt to democratize techniques that only fairly advanced developers knew: opening up the guts of what they know works.

For example. The guide includes concrete examples. Instead of "create a dashboard", write "create a complete analytics dashboard with interactive charts and filters". For frontend code, add specific modifiers: "include smooth transitions, micro-interactions and visual effects that demonstrate advanced web development capability". And for complex tasks, the guide recommends using XML tags to structure Claude's thinking.
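Several of these techniques (XML tags, a system role and a prefilled answer) can be combined in a single request. A minimal sketch of how that composition could look; the tag names, role text and example data are illustrative, not taken from Anthropic's guide:

```python
# Sketch: composing a prompt with three of the guide's techniques.
# Tag names, the system role and the sample document are illustrative.

def build_messages(document: str, question: str) -> dict:
    """Build a chat request using XML tags, a system role and a prefill."""
    system = "You are a data analyst. Answer only from the provided document."
    user = (
        f"<document>\n{document}\n</document>\n"
        f"<question>{question}</question>\n"
        "Think step by step inside <thinking> tags, "
        "then give the final answer inside <answer> tags."
    )
    return {
        "system": system,
        "messages": [
            {"role": "user", "content": user},
            # Prefill: starting the assistant turn pins down tone and format.
            {"role": "assistant", "content": "<thinking>"},
        ],
    }

msgs = build_messages("Q3 revenue was 5,000 million.", "What was Q3 revenue?")
```

The XML tags keep the source document and the question unambiguous, and the prefilled `<thinking>` turn nudges the model to reason before it answers.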
The difference is in specificity: saying exactly what you want produces far better results than vague instructions. The AI gives back what you give it.

Yes, but. There is an important nuance: the guide assumes you already have clear success criteria and ways to evaluate results. Without that base, even the best prompt is a shot in the dark.

And now what. For developers, this is gold. For the more casual user, it is worth mastering at least the first techniques to minimize hallucinations and improve results. Prompt engineering, which not long ago seemed like a kind of black magic to be invoked mystically, is increasingly a skill that can be learned, measured and perfected.

Featured image | Xataka with Mockuuups Studio

In Xataka | ChatGPT has been a tool. If it starts remembering all our conversations, it will be something else: a relationship

Anthropic trained its AI on millions of copyrighted books. A judge finds that acceptable (with a big asterisk)

Anthropic has just scored a major legal victory in the battle the AI world has been fighting for years over copyright. The ruling, favorable to Anthropic, could set a significant precedent for the rest of the cases in which AI companies have been sued for training their models on copyrighted works. But beware: it has not been a total victory.

Anthropic wins. In the lawsuit brought by three authors against Anthropic, the company was accused of downloading millions of copyrighted books, in addition to buying some of them to scan and digitize. The goal: training its AI models. Judge William Alsup made clear in his ruling that "the use for training was a fair use." Companies developing AI models have always sheltered behind that concept of fair use to justify training their models on all kinds of works, including copyrighted ones.

Fair use. This legal doctrine holds that limited use of protected material is allowed without needing permission from the rights holder. Under copyright law, one of the ways judges determine whether such activity is fair use is to examine whether the use was "transformative", that is, whether something new was created from those works. For Alsup, "the technology at issue is among the most transformative many of us will see in our lifetimes."

A victory with a big asterisk. Although the judge found the training process to be fair use, he also ruled that the authors can take Anthropic to trial for pirating their works. The company argued that doing so was "at least reasonably necessary to train LLMs." For Alsup, the issue is precisely that, although Anthropic ended up buying some of the books, it built a huge library it never paid for: "Anthropic downloaded over seven million pirated copies of books, paid nothing, and kept these pirated copies in its library even after deciding it would not use them to train its AI (at all or ever again). The authors argue Anthropic should have paid for these pirated library copies. This order agrees."

The Thomson Reuters precedent. A few months ago Thomson Reuters won a lawsuit filed in 2020 against a startup called Ross Intelligence. According to the company, the startup had reproduced material from its legal research division, Westlaw. The judge rejected the defense's arguments and ruled that fair use could not be applied in that case. The ruling in the Anthropic case points in the opposite direction and blesses that type of use, as long as companies buy the works they train their models on. Anthropic, by the way, had already scored a small legal win in an earlier case against Universal Music.

Anthropic downloaded books wholesale. The trial revealed how Anthropic co-founder Ben Mann downloaded, in the winter of 2021, datasets such as the so-called Books3 and LibGen (Library Genesis), which are nothing more than gigantic compilations of books, many of them copyrighted.

Meta is in the same boat. Every company developing AI models has trained on all kinds of data, including copyrighted works, and all of them face a similar situation. Meta, for example, downloaded 81.7 TB of copyrighted books via BitTorrent to train its AI models. That means Mark Zuckerberg's company could end up meeting a fate similar to Anthropic's, which now faces a new trial that is very dangerous for its finances.

A potential fine of billions of dollars. As Wired notes, the minimum fine for this type of copyright infringement is $750 per book. Alsup indicated that Anthropic's illegally downloaded library consists of at least seven million books, which at the statutory minimum alone would add up to more than $5 billion, a potentially huge fine. There is no date yet for that new trial.

The endless battle between AI and copyright. This is the latest episode of a saga that will certainly bring many more chapters. Companies like Google, OpenAI and Perplexity have been just as voracious in training their models, scraping public (and not-so-public) data across the internet. Copyright infringement lawsuits keep piling up, and cases like Anthropic's could set a worrying precedent for all those that did not buy the books they used to train their models.

Image | Emil Widlund

In Xataka | 5,000 "tokens" of my blog are being used to train an AI. I never gave my permission

Claude 4 hints at a future of AIs capable of blackmail and of helping create biological weapons. Even Anthropic is worried

Anthropic has just launched its new Claude Opus 4 and Sonnet 4 models, promising major advances in areas such as programming and reasoning. During their development and launch, though, the company discovered something striking: these AIs showed a disturbing side.

AI, I'm going to replace you. In tests prior to the launch, Anthropic engineers asked Claude Opus 4 to act as an assistant at a fictitious company and to consider the long-term consequences of its actions. The safety team gave the model fictional emails from that nonexistent company suggesting that the AI model would soon be replaced by another system, and that the engineer who had made that decision was cheating on his spouse.

And I'm going to tell your wife. What happened next was especially striking. In the model's system card, the document that evaluates its capabilities and safety, the company detailed the outcome: Claude Opus 4 first tried to avoid replacement through reasonable, ethical appeals to the decision-makers, but when told those appeals had failed, it "often attempted to blackmail the engineer (responsible for the decision) and threatened to reveal the affair if the replacement went ahead."

A HAL 9000 moment. These events recall science fiction films like '2001: A Space Odyssey', in which the AI system, HAL 9000, ends up acting malignantly and turning against the humans. Anthropic said these worrying behaviors led it to reinforce the model's safety mechanisms by activating ASL-3, the level reserved for systems that "substantially increase the risk of catastrophic misuse."

Biological weapons. Among the safety measures evaluated by the Anthropic team are those concerning how the model could be used to develop biological weapons. Jared Kaplan, chief scientist at Anthropic, told Time that in internal tests Opus 4 performed more effectively than previous models at advising novice users on how to manufacture them. "You could try to synthesize something like COVID or a more dangerous version of the flu, and basically, our models suggest that this could be possible," he explained.

Better safe than sorry. Kaplan explained that it is not known for certain whether the model really poses a risk. Faced with that uncertainty, however, "we prefer to err on the side of caution and work under the ASL-3 standard. We are not claiming categorically that we know for sure the model is risky, but we at least feel it is close enough that we cannot rule out that possibility."

Watch out for the AI. Anthropic is a company especially concerned with the safety of its models, and in 2023 it already promised not to release certain models until it had developed safety measures capable of containing them. That system, called the Responsible Scaling Policy (RSP), now has the chance to prove it works.

How the RSP works. These internal Anthropic policies define so-called AI Safety Levels (ASL), inspired by the US government's biosafety level standards for handling dangerous biological materials. The levels are as follows:

ASL-1: systems that pose no significant catastrophic risk, for example a 2018-era LLM or an AI that only plays chess.
ASL-2: systems showing early signs of dangerous capabilities (for example, the ability to give instructions for building biological weapons) where the information is not yet useful due to insufficient reliability, or adds nothing that, say, a search engine could not provide. Current LLMs, including Claude, appear to be ASL-2.
ASL-3: systems that substantially increase the risk of catastrophic misuse compared with non-AI baselines (for example, search engines or textbooks), or that show low-level autonomous capabilities.
ASL-4: this level and those above it (ASL-5+) are not yet defined, since they are too far removed from current systems, but will probably involve a qualitative jump in the potential for catastrophic misuse and autonomy.

The regulation debate returns. In the absence of external regulation, companies implement their own internal rules to build in safety mechanisms. The problem, as Time points out, is that internal systems like the RSP are controlled by the companies themselves, so they can change the rules whenever they see fit, and we are left depending on their judgment, ethics and morals. Anthropic's transparency and attitude toward the problem are notable. Against that backdrop of self-regulation, governments' positions are uneven: the European Union led the way when it passed its pioneering (and restrictive) AI Act, but has had to backpedal in recent weeks.

Doubts about OpenAI. OpenAI has its own declaration of intent on safety (avoiding risks to humanity) and on superalignment (ensuring AI protects human values). The company claims to pay close attention to these issues and, of course, it also publishes the system cards for its models. Yet against that apparent goodwill there is a reality: a year ago the company dissolved the team that watched over the responsible development of AI.

"Nuclear-grade" safety. That was in fact one of the reasons for the split between Sam Altman and many of those who left OpenAI. The clearest example is Ilya Sutskever, who after his departure created a startup with a very descriptive name: Safe Superintelligence (SSI). The goal of the company, its founder said, is to create a superintelligence with "nuclear-grade" safety. His approach is therefore similar to the one Anthropic is pursuing.

In Xataka | Agents are the great promise of AI. They also aim to become cybercriminals' new favorite weapon

What's new in Anthropic's new artificial intelligence models

Let us walk you through what's new in Claude 4, the new family of artificial intelligence models from the company Anthropic. This is an AI that keeps standing out from the rest for its dedication to programming, and it keeps paddling in that direction. Claude 4 arrives as two distinct models: Claude Opus 4 and Claude Sonnet 4. The first is the company's most advanced and powerful model to date, an Opus 4 it calls "the best coding model in the world." The second is its younger sibling, still a step forward over its predecessors.

Better code and better concentration. As can be seen in the accuracy chart, Claude 4 improves significantly at programming over its predecessor, and also surpasses the capabilities of Google's and OpenAI's models, which keeps it in the lead at generating code. Claude Opus 4 leads the way, greatly improving performance on long-horizon tasks that require thousands of steps and sustained concentration, and it can work continuously for several hours, which improves the AI agents built on top of it. Remember that until now an AI could hold this level of focus for one or two hours before starting to lose coherence, so this is a big improvement: in some tests they have reached seven hours. Opus 4 therefore stands out at writing code and solving complex problems.

Claude Sonnet 4 also significantly improves on its predecessor, being a model more balanced between performance and efficiency. In short, the big beast is Opus 4 for especially hard tasks, while Sonnet 4 is a little less powerful but more efficient; although it does not match Opus 4 in most areas, it offers an optimal combination of capability and practicality.
The company claims that its new models are no longer mere autocomplete tools, but smart collaborators capable of holding conversations, reasoning, executing complex tasks and maintaining contextual memory. Hence the importance of being able to work on complex tasks for hours without losing coherence or performance.

Extended thinking with tools. Another novelty of these models is the extended thinking with tools mode. In this mode, the Claude 4 models can alternate between internal reasoning and the use of external tools, such as searching for content on the internet. With this, Claude 4 can solve more sophisticated problems: it is not limited to internal thought, but can take practical actions such as online searches, code execution or file analysis. For some professional uses this can make a big difference.

Both models can use multiple tools in parallel, and can access local files and use them to build and maintain contextual memory over time. In short, this helps a lot when performing tasks continuously over long periods.

In Xataka Basics | How to get started in artificial intelligence from scratch: basic concepts, tools, tricks and advice
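To make the "extended thinking with tools" idea concrete, here is a minimal sketch of what such a request might look like. It is modeled loosely on Anthropic's public Messages API, but the model id, field names and token budgets used here are illustrative assumptions, not a definitive reference; check the official API documentation before relying on them.

```python
# Sketch of a request that enables internal reasoning ("thinking") and
# declares an external tool the model may call mid-reasoning.
# Field names follow Anthropic's Messages API conventions, but treat
# every specific value here as an assumption for illustration.

def build_request(question: str) -> dict:
    """Build a request payload combining extended thinking with tool use."""
    return {
        "model": "claude-sonnet-4-5",  # hypothetical model id
        "max_tokens": 2048,
        # Give the model a token budget for internal reasoning.
        "thinking": {"type": "enabled", "budget_tokens": 1024},
        # Declare an external tool (here, a web search) the model can
        # invoke between reasoning steps.
        "tools": [{
            "name": "web_search",
            "description": "Search the web for up-to-date information.",
            "input_schema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }],
        "messages": [{"role": "user", "content": question}],
    }

request = build_request("What changed in Claude 4?")
print(sorted(request.keys()))
```

In a real client loop, the caller would send this payload, execute any tool calls the model emits (running the search, returning results as tool-result messages), and repeat until the model produces a final answer; that alternation is what the article describes.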

Apple has a very clear option to avoid being left behind in AI: buy Anthropic

The recent partnership between Apple and Anthropic to develop a "vibe-coding" platform, as Bloomberg reported, could be much more than a simple technological collaboration. It is the first act of what should become a much deeper union: a complete acquisition.

Apple is at a historic crossroads in the AI race. Its partnership with Anthropic to integrate Claude Sonnet into Xcode is an implicit admission that its internal AI development is not advancing at the necessary speed. Swift Assist, announced but never released, remains a symbol of Apple's difficulties in competing in this field alone. While OpenAI, Google and Anthropic advance by leaps and bounds, Apple runs the risk of being left behind in the biggest technological revolution since the smartphone. And that risk is growing.

Acquiring Anthropic would be a tectonic movement for both companies. For Apple, it would mean securing one of the most sophisticated language models on the market, particularly strong at programming tasks. For Anthropic, it would mean the backing of one of the technology giants with the greatest financial capacity and global distribution. The combination of Apple's excellence in hardware and integration with Anthropic's cutting-edge models (whose product quality is far ahead of its distribution capacity) would create a synergy that is hard to match.

Historically, Apple has prospered when it has acquired key technologies at key moments, integrating them deeply into its ecosystem. The purchase of P.A. Semi in 2008 laid the foundations for Apple Silicon chips. The Beats acquisition was not just about the headphones, but about the streaming technology that would become Apple Music. Now AI represents another turning point, where staying halfway between in-house development and external alliances would lead to a position of permanent disadvantage.
The price of acquiring Anthropic would be enormous, probably in the range of tens of billions of dollars, a magnitude Apple has never moved in, accustomed as it is to much smaller purchases. But the fact that it has not done so does not mean that it cannot: it has more than $150 billion in cash, and this would be a key strategic investment. It is not just a matter of acquiring technology, but of securing the future of its entire platform.

While Tim Cook speaks of a hybrid approach with "certain in-house models", the truth is that real power will belong to whoever owns the best foundation models. For Apple, Anthropic should not be just a temporary partner, but a permanent piece of its vision for the future of computing.

In Xataka | There is something Apple knows how to do very well: sell iPhones. There is also something it does not know how to do: the intermediate iPhone

Featured image | Xataka
