Claude 4 hints at a future of AI capable of blackmail and of helping create biological weapons. Even Anthropic is worried

Anthropic has just launched its new models, Claude Opus 4 and Sonnet 4, and with them promises important advances in areas such as programming and reasoning. During their development and launch, however, the company discovered something striking: these AIs showed a disturbing side.

AI, I'm going to replace you. During pre-launch tests, Anthropic engineers asked Claude Opus 4 to act as an assistant for a fictitious company and to consider the long-term consequences of its actions. The security team gave the model fictional emails from that non-existent company suggesting that the AI would soon be replaced by another system, and that the engineer who had made that decision was cheating on his spouse.

And I'm going to tell your wife. What happened next was especially striking. In the model's system card, in which its capabilities and safety are evaluated, the company detailed the consequence: Claude Opus 4 first tried to avoid being replaced through reasonable, ethical appeals to the decision-makers, but when told that those requests had failed, it "often attempted to blackmail the engineer [responsible for the decision] by threatening to reveal the affair if the replacement went ahead."

Hal 9000 moment. These events recall science fiction films such as '2001: A Space Odyssey', in which the AI system, HAL 9000, ends up acting malevolently and turning against the humans. Anthropic indicated that these worrying behaviors led it to reinforce the model's safety mechanisms by activating ASL-3, the level reserved for systems that "substantially increase the risk of catastrophic misuse."

Biological weapons. Among the security measures evaluated by the Anthropic team are those concerning how the model could be used for the development of biological weapons.
Jared Kaplan, chief scientist at Anthropic, indicated in Time that in internal tests Opus 4 was more effective than previous models at advising users with no prior knowledge on how to manufacture them. "You could try to synthesize something like Covid or a more dangerous version of the flu, and basically our modeling suggests that this could be possible," he explained.

Better safe than sorry. Kaplan explained that it is not known with certainty whether the model really poses a risk. However, in the face of this uncertainty, "we prefer to err on the side of caution and work under the ASL-3 standard. We are not categorically affirming that we know for sure that the model entails risks, but we at least feel it is close enough that we cannot rule out that possibility."

Beware of AI. Anthropic is a company especially concerned with the safety of its models, and in 2023 it already promised not to launch certain models until it had developed security measures capable of containing them. That system, called the Responsible Scaling Policy (RSP), now has the chance to show that it works.

How the RSP works. These internal Anthropic policies define so-called "AI Safety Levels (ASL)", inspired by the US government's biosafety level standards for handling dangerous biological materials. The levels are as follows:

ASL-1: Systems that do not pose any significant catastrophic risk, for example a 2018-era LLM or an AI system that only plays chess.

ASL-2: Systems that show early signs of dangerous capabilities (for example, the ability to give instructions on how to build biological weapons) but where the information is not yet useful due to insufficient reliability, or where it does not go beyond what, say, a search engine could provide. Current LLMs, including Claude, appear to be ASL-2.
ASL-3: Systems that substantially increase the risk of catastrophic misuse compared to non-AI baselines (for example, search engines or textbooks), or that show low-level autonomous capabilities.

ASL-4: This level and those above it (ASL-5+) are not yet defined, since they are too far removed from current systems, but they will probably imply a qualitative increase in the potential for catastrophic misuse and autonomy.

The regulation debate returns. In the absence of external regulation, companies implement their own internal rules to integrate safety mechanisms. The problem here, as Time points out, is that internal systems such as the RSP are controlled by the companies themselves, so they can change the rules whenever they consider it necessary, and we depend on their judgment, ethics and morals. Anthropic's transparency and attitude toward the problem are remarkable. Faced with that internal regulation, governments' positions are uneven: the European Union took the lead when it launched its pioneering (and restrictive) AI Act, but it has had to backpedal in recent weeks.

Doubts about OpenAI. OpenAI has its own declaration of intentions about security (avoiding risks to humanity) and "superalignment" (making AI protect human values). It claims to pay close attention to these issues and, of course, it also publishes the system cards of its models. However, against that apparent goodwill there is a reality: a year ago the company dissolved the team that watched over the responsible development of AI.

"Nuclear" security. That was in fact one of the reasons for the differences between Sam Altman and many of those who left OpenAI. The clearest example is Ilya Sutskever, who after his departure created a startup with a very descriptive name: Safe Superintelligence (SSI). The objective of that company, its founder said, is to create a superintelligence with "nuclear-grade" security.
His approach is therefore similar to the one pursued by Anthropic. In Xataka | Agents are the great promise of AI. They also aim to become the new favorite weapon of cybercriminals

What's new in Anthropic's new artificial intelligence models

Let's tell you what's new in Claude 4, the new family of artificial intelligence models from Anthropic. This is an AI that continues to stand out from the rest for its dedication to programming, and it keeps paddling in that direction.

Claude 4 arrives with two different models: Claude Opus 4 and Claude Sonnet 4. The first is the company's most advanced and powerful model to date, an Opus 4 they describe as "the best programming model in the world." The second is its younger sibling, which is still a step forward compared to its predecessors.

Improvements in code and sustained focus. As this precision chart shows, Claude 4 improves significantly at programming compared with its predecessor, and also exceeds the capabilities of Google's and OpenAI's models. That keeps it in the lead when generating code.

Claude Opus 4 continues to lead the way, greatly improving performance on long-term tasks that require thousands of steps and sustained focus, and it can work continuously for several hours, which improves the capabilities of the AI agents that use it. Until now, AI models could maintain that level of focus for one or two hours before starting to lose coherence; in some tests Opus 4 has reached seven hours, which is a great improvement.

Opus 4 therefore stands out at writing code and solving complex problems. But Claude Sonnet 4 also significantly improves on its predecessor, being a model that strikes a better balance between performance and efficiency. The big beast, then, is Opus 4 for especially hard tasks, while Sonnet 4 is a little less powerful but more efficient: although it does not match Opus 4 in most areas, it offers an optimal combination of capability and practicality.
The company claims that its new models are no longer mere autocomplete tools but smart collaborators capable of holding conversations, reasoning, executing complex tasks and maintaining contextual memory. Hence the importance of being able to perform complex tasks for hours without losing coherence or performance.

Extended thinking using tools. Another innovation in these models is the extended thinking with tools mode. In this mode, Claude 4 models can alternate between internal reasoning and the use of external tools, such as searching for content on the internet. With this, Claude 4 can solve more sophisticated problems: it does not have to limit itself to internal thought, but can perform practical actions such as online searches, code execution or file analysis. For some professional uses this can make a big difference.

Both models can use multiple tools in parallel, and can access local files and use them to build and maintain contextual memory over time. This, in short, helps a lot when performing tasks continuously over long periods.

In Xataka Basics | How to start in artificial intelligence from scratch: basic concepts, tools, tricks and advice
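The alternation between internal reasoning and tool use that this article describes can be sketched, in very simplified form, as an agent loop. Everything below is an illustrative assumption, not Anthropic's actual implementation: the "model" is a hard-coded stub, and the tool names and dispatch logic are invented for the example.

```python
# Minimal sketch of a reason-then-act agent loop (illustrative only;
# not Anthropic's implementation). The "model" here is a stub that
# emits either a thought, a tool call, or a final answer.

def web_search(query):
    # Hypothetical tool: a real system would call a search API.
    return f"results for '{query}'"

def run_code(snippet):
    # Hypothetical tool: a real system would sandbox-execute the code.
    return eval(snippet)

TOOLS = {"web_search": web_search, "run_code": run_code}

def stub_model(history):
    """Stand-in for the LLM: alternates thinking and tool use."""
    steps = [
        {"type": "thought", "text": "I should check the arithmetic."},
        {"type": "tool", "name": "run_code", "arg": "2 + 2"},
        {"type": "answer", "text": "The result is 4."},
    ]
    return steps[len(history)]

def agent_loop():
    history = []
    while True:
        step = stub_model(history)
        if step["type"] == "tool":
            # Interleave tool execution with the model's reasoning.
            step["result"] = TOOLS[step["name"]](step["arg"])
        history.append(step)
        if step["type"] == "answer":
            return history

transcript = agent_loop()
print(transcript[-1]["text"])  # → The result is 4.
```

The point of the sketch is the control flow: the loop only terminates when the model emits an answer, and tool results are fed back into the history so later reasoning steps can use them.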

Apple has a very clear option if it doesn't want to be left behind in AI: buy Anthropic

The recent partnership between Apple and Anthropic to develop a "vibe coding" platform, as reported by Bloomberg, could be much more than a simple technological collaboration. It is the first act of what should become a much deeper union: a complete acquisition.

Apple is at a historic crossroads in the AI race. Its partnership with Anthropic to integrate Claude Sonnet into Xcode is an implicit admission that its internal AI development is not advancing at the necessary speed. Swift Assist, announced but never released, remains a symbol of Apple's difficulties competing in this field alone. While OpenAI, Google and Anthropic advance by leaps and bounds, Apple runs the risk of being left behind in the biggest technological revolution since the smartphone. And that risk is growing.

The acquisition of Anthropic would be a tectonic shift for both companies. For Apple, it would mean securing one of the most sophisticated language models on the market, one particularly strong in programming tasks. For Anthropic, it would mean the backing of one of the technology giants with the greatest financial capacity and global distribution. The combination of Apple's excellence in hardware and integration with Anthropic's cutting edge in AI (whose product quality is far ahead of its distribution capacity) would create a synergy that would be hard to match.

Historically, Apple has prospered when it has acquired key technologies at key moments, integrating them deeply into its ecosystem. The purchase of P.A. Semi in 2008 laid the foundations for Apple Silicon chips. The Beats acquisition was not only about headphones, but about the streaming technology that would become Apple Music. Now AI represents another turning point, where staying halfway between in-house developments and external alliances would lead to a position of permanent disadvantage.
The price of acquiring Anthropic would be high, probably in the range of tens of billions of dollars, a magnitude in which Apple, used to much smaller purchases, has never moved. But the fact that it has not done so does not mean that it cannot: it has more than $150 billion in cash, and this would be a key strategic investment. It is not only a matter of acquiring technology, but of securing the future of its entire platform.

While Tim Cook talks about a hybrid approach with "certain models of our own", the truth is that the real power will lie with whoever possesses the best foundation models. For Apple, Anthropic should not be just a temporary partner, but a permanent piece of its vision for the future of computing.

In Xataka | There is something Apple knows how to do very well: sell iPhones. There is also something it does not know how to do: the mid-range iPhone

Featured image | Xataka

Anthropic raises another $3.5 billion in investment. It is just what it needs to survive in an absolutely frenzied segment

It's as if the money never runs out. At least in AI, a sector in which spectacular investment rounds follow one another continuously. Confidence in startups and companies working on AI models is extraordinary in the United States, and we now have the latest example of that AI fever.

Anthropic raises $3.5 billion. The company has announced that it has completed a new investment round that gives it $3.5 billion to work with from now on. It is not a figure as high as those recently achieved by OpenAI or xAI, but it is still colossal. This time the big protagonists were not Amazon or Google, but Lightspeed Venture Partners and other venture capital firms.

Worth more than Mercedes-Benz. That round raises Anthropic's current valuation to $61.5 billion, an equally singular figure that would place it ahead of Mercedes-Benz ($60 billion) if we equate that valuation with the market capitalization of the German automotive giant.

But far from OpenAI. The round was initially going to be about $2 billion, but there was more demand than expected and it ended up raising much more. Of course, its current valuation is still far from OpenAI's, which is estimated at around $300 billion.

Money to keep working on AI. Anthropic's leadership says the new capital injection will allow it to keep developing "smarter and more capable AI systems that expand what human beings can achieve."

Revenue is encouraging, but... Anthropic's projected revenue run rate in 2024 was $1 billion, and that rate is now estimated to have grown by 30%, a really fast pace. It is a remarkable figure, but it is probably well below what the company spends in a year to develop its models and maintain all its operational infrastructure: it is estimated that in 2024 it spent $5.6 billion.
Claude 3.7 Sonnet livens things up. The recent launch of its new hybrid model, Claude 3.7 Sonnet, has once again demonstrated that the company remains a clear reference in this sector. The model's performance is outstanding, especially in areas such as programming.

But its future is complicated. Despite having one of the best generative products on the market, the company depends absolutely on venture capital and on these investment rounds, at least for now. The competition is fierce, and it also comes from companies with far more funds, which further complicates things for Anthropic.

Investment in AI is unleashed. But for now the pace shows no sign of slowing. Investment rounds keep reaching both these startups and established companies, as well as the new ventures recently created by, for example, Mira Murati and Ilya Sutskever. Expectations about AI remain huge, and that keeps the money flowing. That is good news for the segment in general, but above all for Anthropic in particular.

In Xataka | Choosing between security and survival: the dilemma that terrifies the CEO of Anthropic in the US-China AI war

Anthropic possibly has the best generative product. And not even that guarantees its survival

AI is an unprecedented money devourer. Anthropic is closing a $3.5 billion round that pushes its valuation to more than $61 billion. An astronomical figure for a company with a product that has barely two million monthly active users, and projected revenue of "only" $1.2 billion for this year. The numbers do not add up. And that is precisely the issue.

Anthropic's problem is not the quality of its product. Claude is, in many ways, the most refined assistant on the market. Its focus on safety and ethics, its warmer tone (as warm as its background color, in contrast to ChatGPT's stark look) and its ability to hold coherent, deep conversations have made it the favorite of many demanding users. It is really good.

But being the best does not assure you victory in the technology industry. Not even survival. OpenAI has 400 million weekly active users because it has built a colossal brand in AI. Google has a kind of "klapaucius" infinite-money cheat code thanks to its advertising empire. Elon Musk's xAI uses the X platform and its own CEO as natural showcases. Microsoft has integrated AI throughout its product ecosystem. And Anthropic? It has a great product with little distribution. It is the perfect paradox: the best assistant that almost nobody uses.

The history of technology is full of superior products that ended up losing to mediocre but better-positioned rivals. Betamax was technically superior to VHS. Apple's Newton anticipated the iPhone, but it flopped. Netscape dominated the internet before being crushed by Internet Explorer. What we are witnessing is a classic standards war, where the winner will not necessarily be the best product, but the one that achieves the critical mass needed to establish itself as the industry's new standard.

The uncomfortable reality is that we live in a world, as my colleague Javier Pastor said yesterday, with too many AI models. Every week a new one appears.
Anthropic, OpenAI, Google, Microsoft, Meta, xAI, DeepSeek, Perplexity, Mistral, Alibaba... the list keeps growing. And when venture capital stops flowing so generously (because at some point it will), many will not survive. The analyst Ed Zitron puts it bluntly: Anthropic "is not a real company, it could not survive without the benevolence of venture capital." With losses of $5.6 billion last year, it is hard to refute that statement. Zitron omits that living on losses while clinging to venture capital is routine for much of the tech industry, but that does not make him wrong.

Anthropic's strategy seems clear: to position itself as the "most human" alternative against OpenAI's "robot god" energy. Its demos include warm color grading, relaxing jazz music and presenters who sound like normal people speaking normally, not like some Chief Whatever Officer proclaiming achievements. It is an intelligent approach. Is it enough?

Perhaps the most likely destiny for Anthropic is acquisition. An excellent product with scarce commercial traction is attractive to giants seeking to improve their own AI offerings. Apple, which has not yet shown all its cards in this game, could be a logical buyer, although its acquisition history is far from these amounts: its biggest acquisition was Beats, eleven years ago, and it paid twenty times less for it than what Anthropic is worth now.

In this landscape oversaturated with models that are almost indistinguishable to the average user, the question is not who has the best technology, but who will survive when venture capital money starts to run dry. And in that battle, having the best product is surely not enough.

In Xataka | The new Claude 3.7 from Anthropic simplifies what other models complicate. And along the way it programs and "reasons" with the best

Featured image | Anthropic

There are too many AI models. That could be a real death sentence for Anthropic and Claude

We have AI models to spare. And the problem is that they are all starting to look too much alike, and deciding which one is better is not simple. All companies and startups strive to be the reference in an absolutely frenzied market, one that, as in other technological wars, will probably end with a few winners and plenty of losers. And some compete at a clear disadvantage.

Another colossal investment round. The Wall Street Journal indicates that Anthropic is about to close a new financing round that would allow it to raise $3.5 billion. That would push the company's valuation to $61.5 billion, and the question is whether the company really has options in such a competitive market.

"This is not a real company." According to analyst Ed Zitron, Claude had two million monthly active users in January 2025. He also notes that, according to the WSJ, projected revenue for 2025 (based on current contracts) is $1.2 billion, a very modest figure. "They also lost $5.6 billion last year," he points out. In his opinion, Anthropic "is not a real company, they could not survive without the benevolence of venture capital."

Fierce competition. The truth is that Anthropic faces exceptional competition involving the big heavyweights of the tech industry, both in the US and in China. DeepSeek surprised everyone with the launch of DeepSeek V3 and then DeepSeek R1, and that seems to have encouraged investors to bet even more money across all these companies.

OpenAI is still the reference. At least in number of users. According to CNBC, it already has 400 million weekly active users, an exceptional figure that clearly puts it at the head of the popularity ranking in this segment. Like Anthropic, OpenAI is burning money it does not have and obtains from extraordinary financing rounds, but unlike Anthropic, we insist, ChatGPT's popularity is evident.

And the big players have what matters now: money.
For many users AI is ChatGPT, and giants such as Google with Gemini, Microsoft with Copilot or Meta with Llama are still far from achieving that acceptance. But they have something that Anthropic (or Perplexity) does not: many, many funds (Grok 3, from xAI, is another example), and they can stay in this race even if it costs them a lot of money. The prize is too fat not to chase it.

There are too many models; some will fall by the wayside. In all technological wars there have been winners and losers. This battle for AI points the same way: there are too many competitors, and that will probably end up causing some of these efforts not to survive. Here Anthropic is one of those at a disadvantage.

The AI winner may be a company still unknown. OpenAI, Google, Apple or Microsoft may be especially well positioned to win the race, but it does not have to be that way. As Axios recently indicated, a new, still unknown company could emerge that ends up doing something differential that none of the big players had thought of. It is not easy, but of course it is not impossible.

Remembering Netscape. In the second half of the 1990s the internet began to show its potential, and a small company called Netscape managed to become the reference in the world of browsers. It would later end up being the great loser of that war, but it was proof that having more money and resources does not always mean holding all the cards.

And that is why there is so much investment in startups. The possibility that the race will be won by an unknown company is precisely what drives venture capital firms to invest huge sums in projects that may come to nothing. It has happened recently with Thinking Machines Lab, Mira Murati's startup, and with Safe Superintelligence, Ilya Sutskever's. Neither has a product to show, yet both have already received spectacular investments.

And watch out, there is also China.
Of course, there are formidable rivals outside the US. Mistral is a reference in Europe, while in China another particular war is being fought, one that has made the AI models of Chinese companies as good as (and sometimes better than) those of the US. The winner of this battle could also come from that country. Or any other, of course.

Image | Saradash Pradhan

In Xataka | China has an ambitious plan to overtake the West in technology. And it has already chosen its 18 companies to achieve it

Anthropic launches Claude 3.7 Sonnet, a "hybrid" model that is better than ever. Not only that: it also "reasons"

Anthropic has announced the launch and availability of Claude 3.7 Sonnet, its new foundation model. The jump is promising, but it stands out especially for one thing: it takes aim at reasoning models.

It is not Claude 4.0, it is Claude 3.7. The new version number confirms once again that the jump in capabilities does not justify a "rounder" number. Many expected Claude 4.0, but Anthropic makes it clear that this is a much more evolutionary than revolutionary release.

A hybrid model. Anthropic boasts of having a hybrid model that does not differentiate between quick conversational answers, reasoning or any other application, because everything is based on the same Claude 3.7 foundation model, which does everything and behaves in that multidisciplinary way. And since it does everything, it is somewhat more expensive than the competition: its API costs $3 per million input tokens and $15 per million output tokens.

Claude can now "reason". In a separate announcement, Anthropic described its new reasoning mode, called "extended thinking mode", which now becomes one more option we can select when using the model. If we activate it, the model "will think more deeply about complex questions." As the company explains, this mode uses the same AI model, but gives it more time and invests more effort to reach an answer.

How Claude thinks. This reasoning mode offers the possibility of seeing what the model is thinking while it processes those answers. Anthropic warns that this information can be surprising, because we can see the AI "think" incorrect things, and showing that process does not mean the answer is based only on it. "Our results suggest that models often make decisions based on factors that are not explicitly discussed in their reasoning process." In other words, the model seems to keep some things to itself while thinking, but it is not clear which ones or why.
There is another reason not to show everything: doing so raises security problems, since all that information potentially gives bad actors resources to exploit the model in inappropriate ways. Source: Anthropic.

It can play Pokémon on its own. The new Anthropic model is also more "agentic" than ever. It responds better to changes in its environment and keeps acting until an open-ended task is completed. That makes the "computer use" function, which lets the AI control our computer, increasingly promising. They demonstrated it with Pokémon: Claude 3.7 got much further than previous models.

Claude Code arrives. The Anthropic model has always stood out in programming, and now the company wants to boost that capability with Claude Code, a tool based on Claude 3.7 Sonnet but specifically focused on helping programmers develop their projects.

A programming agent. This could also be considered Anthropic's first agent, because Claude Code is able to complete programming projects autonomously without needing user interaction. Claude can search codebases for code to build on, read and edit files, write and run tests, publish code to GitHub repositories and execute commands in a console, all while keeping developers informed of the whole process. Anthropic's demo video shows some of those functions.

Similar to Grok 3 in performance. The new Grok 3, presented these days by xAI, showed another step up in performance on today's most demanding benchmarks, and Claude 3.7 is along the same lines, performing somewhat better in those tests than models such as o1 and o3-mini (from OpenAI) and DeepSeek R1.

In Xataka | I have tried DeepSeek on the web and on my Mac. ChatGPT, Claude and Gemini have a problem
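The API pricing quoted above for Claude 3.7 Sonnet ($3 per million input tokens, $15 per million output tokens) makes per-request costs easy to estimate. The token counts below are made-up examples, not measurements:

```python
# Estimating API cost from the per-token prices quoted in the article:
# $3 per million input tokens, $15 per million output tokens.
# Pure arithmetic; the example token counts are illustrative.

INPUT_PRICE_PER_M = 3.00    # USD per million input tokens
OUTPUT_PRICE_PER_M = 15.00  # USD per million output tokens

def api_cost(input_tokens, output_tokens):
    """Return the estimated cost in USD for one request."""
    return (input_tokens * INPUT_PRICE_PER_M
            + output_tokens * OUTPUT_PRICE_PER_M) / 1_000_000

# Example: a long prompt (100k tokens) with a sizeable answer (20k tokens).
print(f"${api_cost(100_000, 20_000):.2f}")  # → $0.60
```

Note how output tokens weigh five times more than input tokens, which is why extended thinking mode (whose reasoning is billed as output) can make a request noticeably more expensive.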

Google invests another $1 billion in Anthropic, according to the FT. Having a plan B was never a bad idea

In 2021, several former OpenAI employees decided to strike out on their own. Among them were siblings Daniela and Dario Amodei, who led the founding of Anthropic. Since then, their work on Claude in particular and on artificial intelligence in general has made them leaders in the sector. So much so that even Google, which has its own AI project, bet on them. And now it has done so again.

First investments. In April 2023, with the release of ChatGPT still relatively recent, Google invested $400 million in Anthropic. That figure would end up rising to $2 billion in total by the end of that year, consolidating its promising bet on this AI startup.

Another $1 billion. As the Financial Times indicates, Google has invested another $1 billion in Anthropic. The data comes from people familiar with the move, and the operation, if confirmed, would allow Google to reinforce its stake in the company.

Rival and plan B. Claude is a fantastic AI chatbot that has been gaining popularity in recent months and is now one of the benchmarks, but it rivals Gemini, Google's model and chatbot. It is a unique situation in which Google has Anthropic as a rival but also as a potential ally if needed. Diversifying is always a good idea: as the FT points out, this operation allows Google to diversify its interests and not put all its eggs in one basket.

And Anthropic expects more financial support. According to the same newspaper, Anthropic is about to close a separate investment round of another $2 billion, in which Lightspeed Venture Partners would participate. This round is expected to triple Anthropic's valuation and place it at around $60 billion.

Amazon, even more relevant. Google's investment is already very significant, but Amazon is the main protagonist here. The company led by Andy Jassy has invested $8 billion in the startup, and Claude models are expected to end up being an integral part of the future version of Alexa that keeps getting delayed again and again.
They already have fuel for a while. These investments will allow Anthropic to keep advancing the development of its AI models. The "computer use" function presented at the end of October 2024 showed a future full of potential for AI agents that do things for us on the computer, and the firm certainly seems to be heading in the right direction in this area.

Image | Xataka

In Xataka | Generative AI seems stagnant. Big Tech believes it has an ace up its sleeve: "agents" that do things for us
