Anthropic has announced the launch and availability of Claude 3.7 Sonnet, its new foundation model. The jump is promising, but the model stands out especially for one thing: it takes aim at reasoning models.
It is not Claude 4.0, it is Claude 3.7. The new version number confirms once again that the performance leap does not justify a "rounder" number. Many expected Claude 4.0, but Anthropic makes it clear that this release is far more evolutionary than revolutionary.
A hybrid model. Anthropic boasts of having a hybrid model that does not distinguish between chatting and answering questions quickly, reasoning, or any other application, because everything is based on the Claude 3.7 foundation model, which does everything and behaves in that multidisciplinary way. And since it does everything, it is somewhat more expensive than the competition: its API costs $3 per million input tokens and $15 per million output tokens.
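Those prices make back-of-the-envelope cost estimates easy. A minimal sketch, using only the per-million-token figures quoted above (the token counts in the example are illustrative assumptions):

```python
# Sketch: estimate Claude 3.7 Sonnet API cost from the prices quoted above
# ($3 per million input tokens, $15 per million output tokens).
INPUT_PRICE_PER_MILLION = 3.00   # USD per 1M input tokens
OUTPUT_PRICE_PER_MILLION = 15.00  # USD per 1M output tokens

def estimate_cost_usd(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for one request."""
    return (input_tokens / 1_000_000) * INPUT_PRICE_PER_MILLION \
         + (output_tokens / 1_000_000) * OUTPUT_PRICE_PER_MILLION

# Example: a 2,000-token prompt with a 1,000-token reply.
print(round(estimate_cost_usd(2_000, 1_000), 4))  # → 0.021
```

Note how output tokens dominate the bill: they cost five times as much as input tokens, which matters precisely when a reasoning mode spends extra tokens "thinking".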
Claude can now "reason". In a separate announcement, Anthropic described its new reasoning mode, called "Extended Thinking Mode", which becomes one more option we can toggle when using the model. If we activate it, the model "will think more deeply about complex questions." As those responsible explain, this mode uses the same AI model, but gives it more time and invests more effort to reach an answer.
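In the Anthropic API, extended thinking is enabled with a `thinking` parameter on the Messages endpoint. The sketch below only builds the request payload locally, without making a network call; the model ID and token budgets are illustrative assumptions rather than recommendations:

```python
# Sketch: build a Messages API payload with extended thinking enabled.
# The "thinking" parameter follows Anthropic's documented shape; the model ID
# and the budget values here are illustrative assumptions.
def build_extended_thinking_request(prompt: str, budget_tokens: int = 16_000) -> dict:
    return {
        "model": "claude-3-7-sonnet-20250219",   # assumed model ID
        "max_tokens": budget_tokens + 4_000,     # must exceed the thinking budget
        "thinking": {"type": "enabled", "budget_tokens": budget_tokens},
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_extended_thinking_request("How many primes are below 100?")
print(payload["thinking"])  # → {'type': 'enabled', 'budget_tokens': 16000}
```

The `budget_tokens` value caps how many tokens the model may spend thinking before it answers, which is how "more time and effort" translates into an API knob.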
How Claude thinks. This reasoning mode offers the possibility of seeing what the model is thinking while it processes those answers. Here Anthropic warns that this information can be surprising, because we can see the AI "think" incorrect things; moreover, showing that process does not mean the answer is based only on it. "Our results suggest that models often make decisions based on factors that are not explicitly discussed in their reasoning process."
It keeps things to itself. That is: the model seems to hold things back while thinking, but it is not clear which ones or why. There is another reason not to show everything: doing so raises security problems, since all that information could give bad actors resources to exploit the model in inappropriate ways.

Source: Anthropic
It can play Pokémon on its own. The new Anthropic model is also more "agentic" than ever. It responds better to changes in its environment and keeps acting until an open-ended task is complete. That makes the "Computer Use" function, which allows the AI to control our computer, increasingly promising. They demonstrated it with Pokémon: Claude 3.7 got much further than previous models.
Claude Code arrives. The Anthropic model has always stood out in programming, and now the company wants to push that capability with Claude Code, a tool based on Claude 3.7 Sonnet but specifically focused on helping programmers develop their projects.
A programming agent. This could also be considered Anthropic's first agent, because Claude Code is able to complete programming projects autonomously, without needing user interaction. Claude can search through codebases to build on, read and edit files, write and run tests, publish code to GitHub repositories, and execute commands in a console while keeping developers informed of the whole process. Anthropic's demo video shows some of these functions.
Similar to Grok 3 in performance. The new Grok 3, presented these days by xAI, showed another step forward in today's most demanding benchmarks, and Claude 3.7 follows that line: it is somewhat superior in those tests to models such as o1 and o3-mini (from OpenAI) and DeepSeek R1.
In Xataka | I have tried DeepSeek on the web and on my Mac. ChatGPT, Claude and Gemini have a problem