Claude looks like GPT and GPT looks like Claude

OpenAI launched its new foundation model, GPT-5.5, yesterday. It did so just a week after Anthropic released Opus 4.7, confirming the frenetic cadence several AI companies are locked into: not a week goes by without at least one important release. Each model beats the previous one in the benchmarks, but the surprise is the impression the latest OpenAI and Anthropic models leave. It's as if the roles had been swapped.

Really good… GPT-5.5 is "our smartest and most intuitive yet." OpenAI says this version understands what you really need faster, and you no longer have to give it so many details for it to "intuit" what you want. It is now available to subscribers of the Plus, Pro, Business and Enterprise plans.

…and expensive. Access for API users will arrive "very soon" according to OpenAI's engineers, but beware: it will not be a cheap model. It will cost $5 per million input tokens and $30 per million output tokens. That is double what GPT-5.4 cost, but OpenAI seems sure it is worth paying that price. And they may be right. There is an even more expensive version: GPT-5.5 Pro costs $30 per million input tokens and $180 per million output tokens. It is the highest price we have seen in AI models, although OpenAI says the model is more token-efficient, which, if true, reduces the real cost per task.

Agentic by design. The new GPT-5.5 is positioned as a model designed to complete tasks, not so much to answer questions. The distinction is very intentional: previous versions required detailed prompts and constant supervision, while GPT-5.5 is aimed at long agentic tasks where the model has to make autonomous decisions over multiple steps. The model uses algorithms it designed itself which, according to OpenAI, allow it to generate tokens 20% faster than GPT-5.4, and some users seem to have noticed the change.

Benchmarks with nuances.
The comparison table published by OpenAI shows GPT-5.5 winning in 14 of those benchmarks, versus 4 for Opus 4.7 and 2 for Gemini 3.1 Pro. As always, these are internal tests that will have to be validated independently, but there are curious data points. GPT-5.5 dominates in Terminal-Bench, FrontierMath and ARC-AGI-2, while Opus 4.7 dominates in SWE-Bench Pro (programming), although according to OpenAI it does so with a "memorization" technique that could influence the results. The people behind the Artificial Analysis Intelligence Index are clear that GPT-5.5 is currently the most powerful model on the market, and the leap over its predecessor is notable.

GPT now looks like Claude (and vice versa). The user community's reactions have drawn attention not to the power of these models, but to their behavior. In The Neuron newsletter they explain that Opus 4.7 now seems more like a GPT: it consumes more tokens, writes more and no longer responds with that tone so characteristic of Anthropic models. The exact opposite happens with GPT-5.5, which gives the feeling of using Claude. It writes concisely, does not seem as clumsy when it reasons quickly, and is more direct. Dan Shipper, CEO of Every, notes that Opus 4.7 seems slow compared to GPT-5.5. For analysts like Dylan Patel of SemiAnalysis, the reason is that Opus 4.7 is deliberately compute-intensive.

OpenAI has an advantage. Here an interesting advantage for OpenAI appears: the company has always tried to guarantee future computing capacity. It may not have fully achieved it, because demand keeps growing, but here it seems to have room for maneuver, and that allows its most advanced models to avoid the infrastructure problems Anthropic has. It's as if Anthropic were a Ferrari with rationed fuel, and as if OpenAI had just bought the gas station and had (more or less) plenty for its models.

A small win for OpenAI?
It's early to say, but the reception of Opus 4.7 has not been as good as expected, and if GPT-5.5 indeed lives up to expectations, we could see a surprising change of leadership here. It seemed that Anthropic had everything under control with Claude Code and Claude Opus 4.6, but the recent criticism of Opus 4.7 and the apparent virtues of GPT-5.5 could mean a battle won for OpenAI, which certainly needs such wins ahead of its IPO. Meanwhile, of course, other rivals are lurking.

In Xataka | Someone has had a simple idea so that data centers do not collapse in Spain: "unplug them" 18 days a year

Select the model to use among Claude, GPT, Gemini, Kimi, Grok or Sonar

We're going to tell you how you can choose which artificial intelligence model to use with Perplexity in a prompt. This is a chatbot known for giving you access to many cutting-edge models from third-party companies, something it does automatically depending on the request you make.

However, if you are going to use Perplexity, it is advisable to know one of its basic functions: being able to choose by hand which model you want to use. And yes, every time Google, Anthropic or OpenAI launches a new artificial intelligence model, Perplexity will add it to its catalog. The results will not be exactly the same as if you used the paid versions of ChatGPT, Grok, Claude or Gemini, because Perplexity may modify them a little. However, you will be able to take advantage of the reasoning power of these models.

Choose the AI model to use in Perplexity

To choose the AI you want to use in Perplexity, you have to look at the box where you write the prompt. In it, click on the AI model option, which appears with an icon that looks like a chip. It is at the far left of the row of icons that appears at the bottom right of the prompt writing field.

When you click that button, a list of all the artificial intelligence models you can use will appear. Both the best and the latest available from Gemini, GPT, Claude, Grok, Kimi or Perplexity's own Sonar will be listed. This is something you can do in its web version or in its mobile or desktop applications.

Here, you should know that you can choose the model with each prompt within a conversation with Perplexity. In other words, you can ask a question with one model, and then ask the next question with another. Also, below the list you will see the number of queries you can make with the most modern models.

In Xataka Basics | The best prompts to save hours of work and do your tasks with ChatGPT, Gemini, Copilot or other artificial intelligence
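For developers, the same per-prompt model choice also exists in Perplexity's OpenAI-compatible API, where each request names a model explicitly. This is a minimal sketch, not official sample code: the endpoint URL and the model names ("sonar", "sonar-pro") are assumptions based on Perplexity's public API, so check its documentation before using them.

```python
# Hypothetical sketch: Perplexity's API follows the OpenAI chat-completions
# shape, so the model is picked per request via the "model" field.
API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint

def build_chat_payload(model: str, question: str) -> dict:
    """Build a chat-completion request body that pins a specific model."""
    return {
        "model": model,  # e.g. "sonar" or "sonar-pro" (assumed names)
        "messages": [{"role": "user", "content": question}],
    }

# Just like the per-prompt selector in the chatbot UI, each request in a
# conversation can name a different model.
first = build_chat_payload("sonar", "Summarize today's AI news.")
second = build_chat_payload("sonar-pro", "Now go deeper on the new releases.")
print(first["model"], second["model"])  # prints: sonar sonar-pro

# To actually send a payload you would POST it as JSON to API_URL with an
# "Authorization: Bearer <key>" header (requires a Perplexity API key).
```

The payloads are built locally here; only the final POST (omitted) needs a key and network access.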

OpenAI reinforces its security while GPT-5 appears on the horizon, according to the FT

DeepSeek's irruption into the market did not go unnoticed. On January 20, 2025 it released its free chatbot, and in a matter of days it climbed to the top of the rankings in the American App Store, surpassing ChatGPT. The impact was such that it even directly affected NVIDIA's stock market value.

That success soon generated friction. Toward the end of that same month, OpenAI launched a serious accusation: the Chinese startup had allegedly used its closed models to train its open source alternative. In statements to the Financial Times, the company led by Sam Altman claimed to have evidence of a distillation process that pointed to its Chinese rival.

OpenAI reinforces its security measures

Almost six months have passed since then. And if something has become clear, it is that OpenAI did not let the episode go unnoticed. According to the aforementioned British newspaper, the company initiated a thorough review of its internal practices and reinforced its security measures to reduce the risk of leaks, or what internally is already referred to as "corporate espionage."

"The episode motivated OpenAI to be much more rigorous," said someone close to the security team. The American company has intensified its efforts to shield itself from the inside: it has not only expanded its cybersecurity workforce, it has also reviewed its protocols and hardened its internal policies.

Among the most relevant measures is the isolation of much of its proprietary technology, which is now managed in offline environments or separated from the rest of its networks. It has also implemented biometric access controls at its offices, reinforced the physical security of its data centers, and applies a policy of "deny egress by default" to protect the weights of its models. If you wonder what this last measure consists of, the answer is quite simple: it means that any critical data transfer is blocked as a general rule, and is only allowed if it has been expressly authorized.
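The deny-by-default logic is easy to picture in a few lines. This is a hypothetical illustration of the policy, not OpenAI's actual implementation; the destination names and the allowlist are invented for the example.

```python
# Hypothetical sketch of a deny-by-default egress policy: every outbound
# transfer is blocked unless its destination was expressly authorized.
ALLOWED_DESTINATIONS = {"backup.internal", "audit.internal"}  # explicit allowlist

def egress_allowed(destination: str, allowlist: frozenset = frozenset(ALLOWED_DESTINATIONS)) -> bool:
    """Return True only for expressly authorized destinations; deny otherwise."""
    return destination in allowlist

# A transfer to an unknown host is denied by default, without needing a
# specific rule against it; only allowlisted hosts get through.
print(egress_allowed("backup.internal"))   # prints True  (authorized)
print(egress_allowed("evil.example.com"))  # prints False (blocked by default)
```

The key design choice is that the safe state requires no configuration: forgetting to add a rule blocks a transfer rather than permitting it.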
Thus, any attempt to extract the models, or sensitive parts of them, is automatically denied unless it is manually enabled, minimizing the risk of leaks.

This armor comes at a key moment for the company. The race for leadership in artificial intelligence has accelerated, and OpenAI is no longer seen as an unattainable force. The competition is closing in, talent is expensive (Meta has begun paying exorbitant figures for its engineers), and all eyes are on the arrival of GPT-5. Although OpenAI has not confirmed the exact date, Sam Altman has said that the new model is on its way and that it will be "a great evolution" with respect to what we know. Everything indicates that GPT-5 will unify the best of the previous versions, with greater reasoning capacity, better customization and, presumably, a broader context window. The pressure to launch it without errors, in an increasingly competitive environment, is maximum.

Images | Solen Feyissa | Xataka with Grok

In Xataka | Huawei and Alibaba's grudge threatens to transform into a real war: that of AI between Chinese companies
