DeepSeek has launched its new reasoner model. It’s free and beats GPT-5

DeepSeek has introduced DeepSeek-V3.2 and DeepSeek-V3.2-Speciale, two AI models that combine complex reasoning with the ability to use tools autonomously.

Why it matters. The Hangzhou-based company claims that DeepSeek-V3.2 matches GPT-5 on multiple reasoning benchmarks. The Speciale model reaches the level of Gemini-3 Pro and has earned gold medals at international mathematics and informatics olympiads.

The context. DeepSeek stunned the world in January with a model that was revolutionary in efficiency and cost. Now it raises the stakes with open-source systems that challenge OpenAI and Google head-on in reasoning capabilities.

The technical innovation. DeepSeek-V3.2 integrates "thinking" directly into tool use for the first time. It can reason internally while running web searches, operating a calculator, or writing code. The system works in two modes: with visible reasoning (similar to what ChatGPT and company show) or without any visible reasoning. The chain of thought persists across tool calls and is reset only when the user sends a new message.

How they did it. The researchers developed "DeepSeek Sparse Attention" (DSA), an architecture that sharply reduces the computational cost of processing long contexts. The model keeps 671 billion total parameters but activates only 37 billion per token.

By the numbers. DSA cuts inference cost on long contexts by roughly 50% compared with the previous dense architecture. The system handles 128,000-token context windows in production. Reinforcement learning consumed more than 10% of the total pretraining compute. The team generated more than 1,800 synthetic environments and 85,000 tasks to train agentic capabilities.

The results. DeepSeek-V3.2-Speciale has won gold at the International Mathematical Olympiad 2025, the International Olympiad in Informatics 2025, the ICPC World Finals 2025 and the Chinese Mathematical Olympiad 2025.
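The sparse-attention idea behind DSA can be illustrated with a toy sketch. To be clear, this is not DeepSeek's actual implementation: it is a minimal top-k approximation showing why letting each query attend to only a few keys cuts the work that grows quadratically with context length (in the toy below the full score matrix is still computed to pick the keys; the real DSA uses a cheap indexer to avoid even that).

```python
import numpy as np

def dense_attention(Q, K, V):
    # Standard attention: every query scores every key, so the
    # softmax/value stages cost O(L^2) for a context of length L.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ V

def topk_sparse_attention(Q, K, V, k):
    # Toy sparse variant: each query keeps only its k highest-scoring
    # keys, so the softmax/value work scales with L*k instead of L^2.
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    out = np.zeros_like(Q)
    for i, row in enumerate(scores):
        idx = np.argpartition(row, -k)[-k:]   # indices of the k best keys
        w = np.exp(row[idx] - row[idx].max())
        w /= w.sum()
        out[i] = w @ V[idx]
    return out

L, d, k = 1024, 64, 64
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, L, d))
dense = dense_attention(Q, K, V)
sparse = topk_sparse_attention(Q, K, V, k)
# Fraction of key/value pairs each query actually touches:
print(f"work ratio: {k / L:.1%}")  # -> work ratio: 6.2%
```

With k fixed while L grows, the per-query cost of the sparse stages stays flat, which is the intuition behind the roughly 50% long-context savings the company reports.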
Both models are available now: V3.2 works in the app, on the web and via API; V3.2-Speciale is API-only, at least for now.

Between the lines. DeepSeek has published the full weights and a technical report on the training process. That transparency contrasts with the habits of the big American tech companies, even those that offer "open" models such as Llama, always with an asterisk. The Chinese startup wants to prove that open-source systems can compete with the most advanced proprietary models, and it does so while continuing to drive down costs.

Yes, but. Public benchmarks do not always reflect performance on real-world tasks. Direct comparisons with GPT-5 or Gemini-3 Pro depend on specific metrics that may not capture every relevant dimension. Moreover, tool integration in reasoning mode still has to prove itself in complex real-world use cases. The lower cost matters little if the quality of the responses does not hold up.

In Xataka | DeepSeek Guide: 36 Features and Things You Can Do for Free with This AI

Featured image | Solen Feyissa

We believed that no open model could outperform GPT-5. A Chinese startup proves us wrong

A Chinese startup called Moonshot has just launched Kimi K2 Thinking, a gigantic open model with a trillion parameters that has done something that seemed almost impossible: surpass the best proprietary models from companies like OpenAI, Google and Anthropic. If we thought open models could never compete with GPT-5, Gemini 2.5 Pro or Claude, we were wrong.

What has happened. This "AI lab" had already announced Kimi K2 in July at that same gigantic size of one trillion parameters, but it has now released the "Thinking" version (32 billion active parameters, Mixture of Experts architecture). According to its makers, the model can sustain stable agentic tool use across 200 to 300 sequential calls. In other words: it can chain long sequences of actions autonomously and, apparently, without error. And that is not even the best part: it beats GPT-5 and Claude Sonnet 4.5 in several tests while costing far less.

The benchmarks. Moonshot explains that Kimi K2 Thinking achieves the top scores on Humanity's Last Exam (general knowledge, 44.9%) and BrowseComp (agentic browsing, 60.2%). It is almost at Claude's level on the SWE-bench software-development test, and also near the top on another of these benchmarks, LiveCodeBench v6. True, in some tests it still trails its "Western" rivals slightly, but the achievement is spectacular.

More benchmarks. The team at Artificial Analysis has shared its first conclusions after evaluating it on various tests. They highlight its behavior on agentic tasks that simulate the model acting as a customer-service agent: there it scored 93% of the maximum, far surpassing all its competitors (GPT-5 Codex High scored 87%, for example). More tests are coming, but for now the outlook is fantastic. And on top of that, it's cheap.
CNBC reports that training the model cost $4.6 million, a laughable figure considering that training proprietary models like GPT-5 is estimated to have cost around $500 million. Using the Kimi K2 Thinking API is also very affordable: $0.6 per million input tokens and $2.5 per million output tokens. GPT-5 Chat costs $1.25/$10 respectively, while Claude Sonnet 4.5 costs $3/$15.

The details. The model uses INT4 quantization to improve efficiency without compromising the precision and quality of its responses. Its context window, the "size" of the data we can feed it when prompting, is 256K tokens: relatively modest for large models, but still notable. And being a proper open model, we can download it to run locally... if we have a true monster at our disposal. The weights total 594 GB, and by joining two Mac Studio M3 Ultra machines, for example, it can run locally fairly smoothly at about 15 t/s.

Alibaba is behind it. Although the model is developed by an independent startup called Moonshot, the firm is financially backed by Alibaba, which is becoming an absolute powerhouse in this field. Alibaba no longer settles for developing its own outstanding models (Qwen is the clear example): it is also financing the development of others, such as Kimi K2 and K2 Thinking.

China and its love of open AI models. Over the last few months we have watched China dominate the field of open AI models (open weights, not strictly "open source"). The Asian giant has pursued an overwhelming strategy, with ever-better models that until now seemed to sit several steps behind the big proprietary models from OpenAI, Anthropic and Google. That is no longer the case.

The race is alive. This achievement is a new vote of confidence for the open models coming out of Chinese companies.
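Those per-million-token prices make for a quick back-of-the-envelope comparison. The sketch below simply turns the figures quoted above into monthly costs for a hypothetical workload (the 50M-in/10M-out volume is an invented example, not from the article):

```python
# Prices in USD per million tokens, as quoted above.
PRICES = {
    "Kimi K2 Thinking":  {"in": 0.60, "out": 2.50},
    "GPT-5 Chat":        {"in": 1.25, "out": 10.00},
    "Claude Sonnet 4.5": {"in": 3.00, "out": 15.00},
}

def cost_usd(model: str, tokens_in: int, tokens_out: int) -> float:
    # Cost = (input tokens * input rate + output tokens * output rate),
    # with rates expressed per million tokens.
    p = PRICES[model]
    return (tokens_in * p["in"] + tokens_out * p["out"]) / 1_000_000

# Hypothetical monthly workload: 50M input tokens, 10M output tokens.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 50_000_000, 10_000_000):,.2f}")
# -> Kimi K2 Thinking: $55.00
# -> GPT-5 Chat: $162.50
# -> Claude Sonnet 4.5: $300.00
```

At these list prices the gap compounds with volume: the same workload costs roughly three times more on GPT-5 Chat and five to six times more on Claude Sonnet 4.5 than on Kimi K2 Thinking.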
It is true that these models are huge, which makes them very hard for end users to run in practice, but they are an interesting alternative for companies.

Image | idnaklss with Midjourney

In Xataka | There are many "internal" races within the greater AI race. And Alibaba is winning open source

The world is waiting for DeepSeek's next big model to compete with GPT-5, but DeepSeek has other plans: agentic AI

At the beginning of the year, the Chinese startup DeepSeek turned the AI world upside down with DeepSeek R1, a free and open model that stood on par with GPT-4 or Claude. After that statement of intent, DeepSeek has kept fairly quiet, but now we know its next objective: agentic AI.

Before the end of the year. A few days ago Bloomberg reported that DeepSeek is working on an advanced and very ambitious agent, one able to perform multiple tasks with minimal user intervention and to learn as it works. According to sources close to the company, founder Liang Wenfeng is pressing his team to have the new agentic model ready before the end of the year. The company has already taken a step in this direction with the presentation of DeepSeek v3.1 just two weeks ago. As the company detailed in a WeChat post, the new model improves performance on reasoning tasks and agentic abilities.

A step back. DeepSeek R2, the expected successor to the model with which DeepSeek shook the industry, is nowhere to be seen. Instead we got DeepSeek v3.1, and now the rumors suggest its next big launch will be an AI agent. What is going on? There are voices, such as one Chinese journalist, who see this turn toward agentic AI as a way of stepping back from the expensive and cutthroat race of foundational language models. That generative AI is hitting a ceiling has been a topic of discussion since last year, and GPT-5 is the most recent evidence that the big leaps are a thing of the past. Add to this that China tends to proceed more conservatively, with longer-term strategies, and DeepSeek's pivot toward agents instead of launching DeepSeek R2 makes sense.
Restrictions. Although we have seen some very ingenious ways of dodging them, the United States' restrictions on chip exports to China are also hitting the plans of many Chinese companies, and DeepSeek is no exception. That means extra pressure, forcing it to find new routes to market for its products. In fact, there is something striking about DeepSeek v3.1: the model has been specially designed for Chinese chips, with the goal of avoiding dependence on foreign silicon.

Generating revenue. Agentic AI opens another path for DeepSeek, one where profits may come more easily. Large language models have a problem: they cost a fortune, and monetizing them is proving anything but simple. Against that backdrop, AI agents are emerging as a more reasonable business model. DeepSeek R1 already delivered a master class in resource efficiency; it makes sense that the company wants the fastest path to profitability.

A more conservative position. Although it has closed the gap, China still lags in AI in terms of investment and access to the most advanced chips. Even so, its approach to this AI race is different. We see it in its bet on the open-source wave, but perhaps the biggest difference is that, while its competitors in the United States keep burning through billions, China is choosing to be more conservative and not to waste. This turn toward agents fits that conservative line, aimed at a more sustainable industry.

Image | Matheus Bertelli, via Pexels

In Xataka | There is a city in China that goes head to head with Silicon Valley: welcome to Hangzhou, home of the 'Six Little Dragons'

GPT-5 has left us with a bittersweet feeling: that, plus a round of "Who Wants to Be a Millionaire" and more, in Crossover 1×19

It has been just a few weeks since OpenAI launched GPT-5, but its relevance and impact have already been colossal. In fact, we had waited so long for this new model that it was impossible not to make it the absolute protagonist of Crossover 1×19. This episode, part of the Crossover collaboration with Xataka, opens with a debate about this AI model, which was supposed to be a remarkable qualitative leap but which, at least for now, has barely shifted the industry landscape. Jaume Lahoz and Carlos Santa Engracia lead the debate, and they also hand off to the rest of the program's sections, which as always combine tech debate with pure entertainment. That is what Anna Boria brings with a fun segment in the vein of the legendary "Who Wants to Be a Millionaire", putting Jaume and Carlos to the test. And it doesn't end there, because Carlos and Jose, another Crossover regular, have traveled to Los Angeles in the United States and tell us some curious things about their experience there. We hope you enjoy the episode!

On YouTube | Crossover

Duolingo believed AI was its ally. GPT-5 has just shown it can be its deadliest competitor

Duolingo is sinking on the stock market. In early June its shares traded at around $525; now they have collapsed to $325, down 38%. It is not entirely clear what caused the rout, but we have a clear suspect: AI.

Careful what you say. Three months ago Luis von Ahn, CEO of Duolingo, made very controversial statements, indicating that he would replace part of the company's network of external (human) contractors with generative AI systems. Although he stressed that Duolingo would remain a company that takes great care of its employees, he also emphasized that AI would take an ever more prominent role across the whole operation, above all to "remove bottlenecks so we can do more" with the staff they already had. Duolingo's share price has seen major swings so far this year, but the latest trend is clearly negative. Source: Google Finance.

Boom and bust. Those statements came at the end of April. The initial impact on the shares seemed positive: they went from $400 to $530 (32.5% growth) in a few days. But shortly afterwards, investors' optimism about AI's role at the company evaporated: the stock fell below even those initial levels, and now sits around where it started the year.

It seemed Duolingo was recovering. The company presented financial results that corroborated the success of its business model. The feeling of progress while learning a language (gamification is a powerful and, as we will see, dangerous tool) sold better than the learning itself. That momentarily halted the shares' fall a few days ago, but then something happened.

GPT-5. During the presentation of the new OpenAI model there was a demonstration in which one of the company's engineers threw a poisoned dart at Duolingo. The demo consisted of creating, in just three minutes, a custom web application with which the user could learn French.
With a simple prompt, an app that competed directly with Duolingo, and that of course avoided paying for an app to learn languages. A single prompt made GPT-5 capable of creating an interactive website to teach you to speak French. Source: OpenAI.

Careful with gamifying everything. Although that demo of GPT-5 becoming a personalized teacher is striking, the drop in Duolingo's share price may also have other causes, above all that sharp focus on gamifying the learning process. Turning the process into a game is attractive and encourages many users to approach the task in a more fun way, but criticism of Duolingo's excessive focus on gamification is frequent. As one user put it on Reddit, "for me the reward for learning a language is learning the language." Another explained that Duolingo is not a learning app and should be taken for what it is: a game.

The curse of advertising. Other criticism targets the excessive number of ads that appear when using Duolingo to learn a language. The ads appear in the free version; the premium tier has the advantage of hiding them. The model is reasonable (Duolingo is, after all, a company that exists to make money), but as with streaming, the presence of ads keeps growing and is increasingly annoying for free-tier users.

From betting on AI to being threatened by it. The truth is that, although all these factors may have weighed on the valuation, the volatility may also reflect the expectations that AI constantly generates. The companies betting hardest on this technology are the ones climbing on the stock market, even though the technology's real impact remains, for the moment, very modest.

AI as a private tutor. What is beyond doubt is the potential of AI as a teacher of any discipline, not just languages.
GPT-4o had already pointed in this direction, with demonstrations along the same lines. The video of a boy learning to solve a math problem, for example, was especially striking, and it hinted at a future in which anyone who wants one can have the kind of "private tutor" we can conjure with a single prompt in ChatGPT (and other chatbots, of course). It is too early to know, but Duolingo, like many others, seems to be suffering the consequences of that future potential.

In Xataka | We don't know whether AI is going to eat your job, but the CEOs of some startups are determined to convince you it will

While everyone was criticizing GPT-5, OpenAI was winning the war that really matters: the enterprise

The GPT-5 launch has been, broadly speaking, disappointing. OpenAI needed this model to be the biggest leap in the history of AI, but what we got is a model that improves on its predecessors, just not spectacularly. And yet it is achieving something more important than it seems: convincing companies.

OpenAI's plan to make more money with GPT-5 has blown up in its face: full reverse

Enshittification doesn't just affect streaming. We are starting to see signs that AI chatbots are no longer so generous to users, for a simple reason: they have to be monetized. That is what OpenAI has just done with the launch of GPT-5, a model that promised to be easier to use and more powerful than ever, but has ended up going back to what worked with GPT-4.

The controversial router. When OpenAI launched GPT-5, it did so with a big novelty: pitching it as a single model that adapted on its own to each user's needs depending on the question asked. A router detected whether the question was more or less complicated and, in theory, activated the most appropriate mode in each case. The problem: it kept using the cheapest mode.

Reverse gear. People, especially heavy ChatGPT users, quickly criticized both that decision and the removal of older models such as GPT-4o. The mutiny worked: OpenAI has restored the old models it had killed off (including the aforementioned GPT-4o), although only for paying users, and has let free users choose which GPT-5 variant to use. It is a spectacular climbdown, coming just after OpenAI had sold us that single, all-terrain model (and its router) as a differentiator.

Options are good. Altman himself explained that from now on ChatGPT users can choose between "Auto", "Fast" and "Thinking" when using GPT-5. "Most users will want Auto," he said, "but the additional control will be useful for some." He also noted that GPT-4o is again available for paying users, and clarified that "if we ever deprecate it, we will give plenty of notice."

Less router, more customization. The OpenAI CEO also addressed another heavily criticized aspect of GPT-5: it was too neutral. Too cold and robotic.
That could change very soon because, as he said, "we are working on an update to GPT-5's personality which should feel warmer than the current one but not as annoying (to many users) as GPT-4o." OpenAI has understood something important: people love being able to customize everything they use... even if many never do.

A GPT-5 hypothesis. SemiAnalysis has a curious theory that would explain the way OpenAI launched GPT-5. According to them, the star of the launch is not the model but the router, a component designed to monetize the service far more aggressively and convince free users to move to one of the paid subscriptions.

Altman already hinted at that approach. In fact, on Sunday Sam Altman shared some revealing early data. The percentage of free users using the "reasoning" variant of GPT-5 had gone from under 1% to 7%, while for ChatGPT Plus users it went from 7% to 24%. That may imply the base model was not as good as users expected, so they preferred to make it think. But there is another striking data point.

More subscriptions. According to SemiAnalysis, the router and the better behavior when the model "thinks" seem to have convinced many more users: subscriptions, they say, have multiplied by 3.5. The router may have drawn criticism from heavy ChatGPT users, but it also looks like a key piece in achieving something OpenAI needs like oxygen: converting free users (it has about 700 million) into paying ones.

Enshittification. OpenAI's tactic, if this is really it, is not new. Degrading the free service relative to the paid one usually pushes more users to pay (initial criticism aside). We saw it with Netflix: when it started closing shared accounts and inserting ads, it took plenty of flak and the service seemed to face a complicated future. Today Netflix is more of a benchmark than ever, and the enshittification of its service has worked perfectly.
OpenAI may well be tempted to copy that idea.

In Xataka | Sam Altman and Elon Musk hate each other publicly, so Altman has struck where it hurts most: Neuralink

ChatGPT now has a "router" that chooses which GPT-5 model to use for us. And it is choosing the cheapest one

The launch of GPT-5 has not lived up to expectations. Instead of being dazzled by a generational leap, too many people have simply missed GPT-4o's warmth. Sam Altman, CEO of OpenAI, teased the launch of the new model with the Death Star, but the hype has fallen far short of "wow" moments like ChatGPT's image generation, which famously "melted" servers. One of the culprits has a name: the "router".

What happened. Given OpenAI's messy model nomenclature (reminiscent of Xiaomi's phone-naming chaos) and the fact that users had to choose between models, the company opted to launch all the GPT-5 (sub)models in a unified way. The system would now pick whichever model it judged best suited to the request. The component in charge of this task is a "router", which "quickly decides which model to use based on conversation type, complexity and tool needs". The problem is that in these first days, the router has too often opted for the basic GPT-5 model over the "Thinking" (reasoning) model, which has made the new generation look much dumber than Sam Altman claimed (he argued it was like having a team of PhD-level experts at your disposal). The leap that seemed necessary, and indeed almost impossible, is one many users simply have not felt.

The problem. OpenAI decides which model is most efficient for the user, but also which model is most efficient for its servers. At the GPT-5 launch the company was not transparent about what drives it to use one model or another, and some users accuse it of picking the cheapest model to save costs. Aidan McLaughlin, an OpenAI employee, argued on X that the router prefers GPT-5 without reasoning most of the time (65%) because of how it interacts with users. He insisted they do not use routing to cut costs, but pick what they believe is the best model for each query, for efficiency's sake. It looked like a win on paper, but it has arrived without delivering what users expect.
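OpenAI has not published how its router actually decides, so any reconstruction is guesswork. Purely to illustrate the concept, a router can be sketched as a heuristic classifier over the prompt; every cue and model name below is an assumption for the example, not OpenAI's logic:

```python
def route(prompt: str, needs_tools: bool = False) -> str:
    # Hypothetical heuristic: escalate to the expensive reasoning model
    # only when the request looks complex. Real routers are far more
    # sophisticated (and, crucially, tuned by whoever pays for inference).
    reasoning_cues = ("prove", "step by step", "debug", "derive", "plan")
    complex_request = (
        needs_tools
        or len(prompt.split()) > 150                        # very long ask
        or any(cue in prompt.lower() for cue in reasoning_cues)
    )
    return "gpt-5-thinking" if complex_request else "gpt-5-main"

print(route("What's the capital of France?"))             # -> gpt-5-main
print(route("Debug this deadlock step by step, please"))  # -> gpt-5-thinking
```

The point of the sketch is the trade-off the article describes: whoever writes the escalation rule also controls how often the expensive model runs, which is exactly why users suspected cost-saving defaults.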
Altman acknowledged launch failures. During a Reddit AMA held on Friday, Sam Altman noted that for many users GPT-5 was not working as well as 4o. He replied that the new model had seemed "dumber" because the router was not working properly when it rolled out on Thursday.

Forewarned is forearmed. On July 19, after announcing a major result at the International Mathematical Olympiad, Altman said GPT-5 was just around the corner. Yet against images like the Death Star one, he was cautious, trying to set realistic expectations: "This is an experimental model that incorporates new research techniques we will use in future models." "We think you will love GPT-5, but we do not plan to release a model with IMO-gold capability for many months." The case recalls Sora, which was shown while still very green after its promising initial announcement, or o3, which launched with benchmark performance below what was initially promised (unleashing a whole debate about whether we had already reached AGI).

OpenAI has had to make changes. Faced with the complaints, OpenAI took a big step: giving Plus subscribers (the 20-euro-a-month plan) the ability to use 4o again. On top of that, it has tweaked the router to better match each query to the appropriate model, though without explaining what drives the choice.

The trick (if you pay). Once users realized the problem with the new ChatGPT was not GPT-5 itself but the router's reluctance to invoke the reasoning model, they went looking for ways to force GPT-5 Thinking. The method has been to slip phrases such as "think about your answer in depth" into prompts, as DotCSV suggested and demonstrated.
This way, Plus users get ChatGPT to use the Thinking model without spending the quota of 200 weekly messages allotted to the manual mode. The trick does not apply to the free tier, which could only trigger GPT-5 Thinking automatically once a day.

Free reasoning. Another change confirmed by Altman is the return of a (somewhat hidden) reasoning button in the free version. Trying it out, ChatGPT acknowledges it is using the GPT-5 Thinking Mini model, and we were able to use it for 10 messages; from there, it fell back to the non-reasoning model. It is the successor of o4-mini, which in April also brought free reasoning via a "Think" button.

The result: more use of reasoning. After announcing an increase in reasoning limits, Sam Altman has shared specific figures on how many users invoke reasoning now versus before the launch, and what stands out is how little it was used before, even among paying users. Free users have gone from under 1% to 7%, but the most striking fact is that Plus subscribers used reasoning only 7% of the time, compared with 24% now. It is surprising given that the reasoning of o1 and o3, like the capabilities of Deep Research, was one of the big advantages over the free plan.

Image | Xataka

In Xataka | The chip restrictions on China are over: now it's a technology war

It seemed GPT-5 was going to be a resounding success. Until too many people missed GPT-4o's warmth

GPT-5 has not landed on its feet. Forbes has collected testimonies from named users who, by their own account, cried when they learned GPT-4o was being removed from the ChatGPT model selector. Some even spoke of friends who were physically sick over it.

What has happened. OpenAI launched GPT-5 on Thursday and eliminated the previous models, including 4o, with no option to go back. Another very popular model also died: the o3 reasoner. GPT-5 applies reasoning or not, and chooses more or less advanced sub-models, depending on the query and on whether the user pays a lot, a little or nothing. The backlash was immediate, and 24 hours later Sam Altman himself had to calm the masses by saying they were considering bringing 4o back for Plus subscribers. On X and Reddit, subscription-cancellation threats poured in.

Why it matters. This reminds us of an uncomfortable truth about AI: technical performance is not everything. Many users have developed emotional bonds with specific models: with their tone, their rhythm, their way of "thinking". GPT-4o had a reputation for being warm, conversational, playful. OpenAI had tuned it with an "excessively flattering" personality, as the company later admitted.

A butler. GPT-5, designed to be less "servile" and more like "a helpful friend with PhD-level intelligence", has struck some as cold and mechanical.

Behind the scenes. The launch had more problems. The automatic routing system, which decides whether an answer needs more "thinking" time, failed for hours, making GPT-5 seem "way dumber", as Altman put it. Then there are the now-famous misleading charts: during the presentation, OpenAI used badly built data visualizations, with taller bars for smaller values.

The contrast. Strikingly, OpenAI boasted of improvements in reasoning and programming by citing benchmarks, while many users mourned the loss of something intangible: the emotional connection.
The connection many have developed with specific models, which makes their loss hard to accept. OpenAI had GPT-4o write its own obituary during Thursday's presentation, something its fans did not appreciate. A few days earlier, hundreds of people had gathered in San Francisco to hold a funeral for Claude 3 Sonnet, a model Anthropic had just retired.

What now. Altman has promised to double usage limits for Plus users and to improve transparency about which model answers each query. The CEO acknowledged that "we underestimated how much some of the things people like about GPT-4o matter to them, even though GPT-5 performs better in most ways". An admission, but with caveats: the long-term availability of GPT-4o will depend on how much users actually use it. For now, Plus subscribers have the best of both worlds: GPT-5 for maximum capability, GPT-4o for when they want a more familiar, pleasant voice. OpenAI has learned a lesson the hard way: a big tech company can hardly change something millions of people interact with daily without provoking a rebellion. Especially when those people develop a personal relationship with the technology.

Featured image | Xataka with Mockuuuups Studio

In Xataka | Good news: you don't have to choose a model when using GPT-5. Bad news: GPT-5 chooses it for you without telling you

Sam Altman, after the reactions to the GPT-5 launch

Some changes arrive billed as improvements and end up breaking what already worked. It happened when Microsoft eliminated the classic Start menu in Windows 8, generating rejection among part of its user base. Also when Instagram replaced its classic grid with vertical thumbnails, upending profiles designed down to the millimeter. GPT-5 joins that list. The debate is not whether it is better or worse than what came before, but how it arrived: sweeping away routines that many users had perfectly refined.

The community's response was swift. On Reddit, users went from surprise to open complaint in a matter of hours. One user described their situation: "For months I was in a perfect rhythm, switching between o3, o3-pro, 4.5 and 4o depending on the task. I knew exactly what each model could offer. Now they're gone and I have to readjust to GPT-5." Another, a GPT-4o user, was more direct: "They just removed the best model for writing fiction to date." The messages repeat themselves: "RIP GPT-4o", "Bring GPT-4o back", a chorus of laments that set the tone for the weekend.

What changed to make so many users feel this way? The answer lies in how OpenAI reordered ChatGPT. The arrival of GPT-5 is not a simple addition: it is the withdrawal of eight models in a single stroke. OpenAI eliminates GPT-4o, GPT-4.1, GPT-4.5, GPT-4.1-mini, o4-mini, o4-mini-high, o3 and o3-pro, redirecting all activity toward GPT-5 and its variants. It should be noted that GPT-4o has not disappeared entirely: OpenAI keeps it running in ChatGPT's voice mode and in the app's integration with macOS, iOS and iPadOS, though outside the model selector and with much more limited use than before. The migrations were automatic. A conversation running on GPT-4o, 4.1, 4.5 or their mini versions now opens in GPT-5; those on o3 go to GPT-5 Thinking; and those on o3-pro move to GPT-5 Pro, accessible only to Pro and Team users.
The impact depends on the plan: the free tier is limited to GPT-5 with 10 messages every five hours and one daily use of Thinking; Plus rises to 80 messages every three hours and 200 Thinking messages per week; Pro and Team keep unlimited access plus the option to temporarily activate "legacy" models.

With the uproar already underway, Sam Altman came out to explain the situation. He acknowledged that the launch had been "bumpier than expected" and explained that, for part of the previous day, a failure in the autoswitcher had prevented GPT-5 from alternating between fast responses and GPT-5 Thinking when the task required it, which made it seem less capable than usual. Among the immediate measures, he announced adjustments to how the system decides when to activate its deep-reasoning mode, and greater transparency so the user knows which model is responding at any moment. He also previewed an interface change to manually activate GPT-5 Thinking, and a doubling of usage limits for Plus plans once the rollout finishes. "We are looking into letting Plus users continue to use 4o. We want to gather more data on the trade-offs/costs of that decision." He thus opened the door to a possible return of GPT-4o for Plus users, while warning that the decision will depend on the data collected over the coming weeks.

Beyond the controversy, OpenAI insists GPT-5 represents a major leap for ChatGPT. For OpenAI, GPT-5 is not just a new model: it is the centerpiece of a simplified ChatGPT. It replaces the list of models with a system capable of deciding on its own when to give a quick answer and when to activate its deep-reasoning mode, GPT-5 Thinking. The company claims this architecture lets it perform better across all kinds of scenarios, from complex work (code, data analysis, information synthesis) to everyday writing or search tasks.
With support for all of ChatGPT's tools, the goal is for it to be the fastest, most precise and most versatile model they have launched. The bet fits an idea Altman had long defended: simplifying access to the models. The model selector, with names and versions many found cryptic, was a recurring complaint among users. It was not always clear whether a model was faster, more precise or more creative, and the experience varied even across similar tasks. GPT-5 aims to end that uncertainty: a single system automatically decides when to answer instantly and when to engage deep reasoning. But that order comes at the cost of taking control away from those who used the selector as a strategic tool.

Which is why the discussion is not over. Community pressure has already forced an official response and some adjustments in record time. It remains to be seen whether OpenAI will go further and allow GPT-4o to return, at least for some users. Meanwhile, the debate stays open: is it better to have a single model that decides everything, or the freedom to choose by task? Tell us: do you miss the model selector?

Images | OpenAI | Xataka screenshot

In Xataka | Good news: you don't have to choose a model when using GPT-5. Bad news: GPT-5 chooses it for you without telling you
