GPT-4.5 is the demonstration that using more GPUs and more data is no longer enough

Over the last two years we have watched the companies developing AI models show an almost limitless voracity. They bet on scaling: using more data and more GPUs to improve those models. But there has been a surprise: that strategy no longer seems to work.

GPT-4.5 will be the last of its lineage. We have always associated ChatGPT with the traditional models "that do not reason", although lately it also gives access to reasoning modes. Even so, its current base is GPT-4o, and that model will have one last successor: GPT-4.5, which will not be renewed. That is precisely what makes it interesting.

Scaling no longer gets you far. As experts like Gary Marcus point out, GPT-4.5 seems to confirm that spending ever more money on scaling, using ever more GPUs and data to train models, no longer pays off. OpenAI's hope was Orion, which aimed to be GPT-5, but it is not: it is (probably) GPT-4.5.

Hitting a wall. The jump in performance and capability was never what was expected, which led to a deceleration in AI. At least, in the generative AI that does not reason. That branch certainly seems to have hit a wall and can no longer improve much. We are facing a total shift in focus toward reasoning models.

It is happening to everyone. GPT-4.5 is OpenAI's acceptance of this new reality, but many other AI companies are in the same situation. New versions of the models "that do not reason" keep failing to arrive. Grok 3 is nowhere to be seen and xAI is falling behind, we have not seen a successor to Claude 3.5 either, and we do not know what Anthropic is working on. Google has just presented Gemini 2.0, but the leap in capabilities over Gemini 1.5 is not spectacular, at least if we set aside its reasoning version, Flash Thinking.

We told you so. Experts like Yann LeCun, Meta's AI chief, had long been warning that the "more data and more GPUs" strategy had an expiration date. Ilya Sutskever, OpenAI co-founder and now running his own AI startup, also made it clear months ago. For him, massively pretraining an AI model on a huge set of unlabeled data so that it detects patterns and structures on its own has nothing more to give: even doing it bigger and bigger no longer offers much of an advantage.
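
To make that idea concrete, here is a minimal, hypothetical sketch of what that self-supervised pretraining loop looks like: the model simply learns to predict the next token of unlabeled text, and "scaling" means doing exactly this with more parameters, more tokens and more GPUs. The tiny model and random "corpus" below are illustrative stand-ins, not any lab's actual setup.

```python
# Minimal sketch of self-supervised next-token pretraining (illustrative only).
# The model size, vocabulary and "corpus" are toy stand-ins for what labs scale up.
import torch
import torch.nn as nn

vocab_size, d_model, seq_len = 256, 64, 32  # tiny illustrative dimensions

class TinyLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        # Causal mask: each position only attends to earlier tokens
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.blocks(self.embed(tokens), mask=mask))

model = TinyLM()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

# "Unlabeled data": raw text needs no labels, since the next token IS the label.
corpus = torch.randint(0, vocab_size, (8, seq_len + 1))  # stand-in for real text

for step in range(100):
    inputs, targets = corpus[:, :-1], corpus[:, 1:]  # shift by one token
    logits = model(inputs)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Sutskever's point is that this recipe, multiplied by ever more compute, stops producing the jumps in capability it once did.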

So why spend so much money? If traditional models can no longer advance through that kind of scaling, the question is obvious: why are companies investing billions of dollars in data centers? There is more than one answer. First, scaling is still useful for refining the models, making them behave better and make fewer mistakes.

Data centers still make sense. But there is also inference: the gigantic infrastructure companies are investing in is not so much for training models the traditional way, but for letting hundreds of millions or even billions of people use AI continuously in their day-to-day. That is the current bet.

Long live the models that reason. The AI slowdown we have been discussing for some time does not apply to "all of AI" but, as we say, to the traditional generative models that do not reason. New models such as o1, DeepSeek R1 or Gemini 2.0 Flash Thinking are clearly the trend: ever more precise, with answers of ever higher quality that genuinely help us trust them. To do work for us almost "blindly".
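
As a hypothetical illustration of what that shift looks like from a developer's seat, the sketch below calls a reasoning model through the OpenAI Python SDK; the model name and the prompt are placeholder assumptions, and other providers expose their reasoning models through equivalent endpoints.

```python
# Illustrative sketch: querying a reasoning model via the OpenAI Python SDK.
# Model name and prompt are placeholders; requires OPENAI_API_KEY to be set.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="o1",  # a reasoning model; "gpt-4o" would be the "non-reasoning" kind
    messages=[
        {
            "role": "user",
            "content": "A train leaves at 9:40 and arrives at 13:05. "
                       "How long is the trip? Explain your reasoning.",
        }
    ],
)

# Reasoning models spend extra hidden "thinking" tokens before answering,
# which is exactly the inference-side compute those data centers will serve.
print(response.choices[0].message.content)
```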

We have AI advances ahead of us for a good while. AI still has a long road in front of it. That the scaling approach (more GPUs, more data, all guns blazing) no longer makes much sense matters little, because there are other paths. Many. And reasoning models are just one of them.

Image | Amazon

In Xataka | OpenAI wants to be the new Google with GPT-5: you will ask, and the AI will decide how to answer you
