AI skeptics warned that we were getting carried away and we didn't believe them: AI is dumb
GPT-5 is somewhat better than GPT-4. The problem with that phrase is the word "somewhat." OpenAI's new "unified" model does not seem to represent the qualitative leap many expected, and the alarms have sounded again. One might ask: what if AI never gets much better than it is now? Maybe that is already happening, and the real question is a different one: what do we do then?

Scaling works, but less. In 2020 a team of OpenAI researchers published a study entitled "Scaling Laws for Neural Language Models." They proposed a sort of Moore's Law for AI: the more data and compute dedicated to training a model, the better it would be. That observation seemed clearly borne out when they launched GPT-3, which was 10 times larger than GPT-2 and much, much better than that model.

Deceleration. Gary Marcus, professor of psychology and neural science at New York University, argued in 2022 that the study did not hold up: "The so-called scaling laws are not universal laws like gravity, but simply observations that might not hold forever." Even Satya Nadella agreed with that assessment a few months ago at the Ignite 2024 event. And as we are seeing, their doubts have come true. Scaling still works, and each model is somewhat better than its predecessor, but the deceleration seems to be here.

But GPT-5 is not that bad. The truth is that GPT-5 has improved on relevant metrics. The team at Epoch AI evaluated its performance on FrontierMath, for example. The results were a bit better than those of its predecessor o4-mini, but there were no big jumps in performance. Even so, they highlighted that GPT-5 was the first model to solve one specific problem as if it "had fully understood the problem."

In mathematics, GPT-5 performs somewhat better than its predecessors, but the difference is not radical: the hardest problems (Tier 4) remain almost impossible for AI models. Source: Epoch AI.

And it thinks better.
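Before looking at the thinking variants, an aside to make the scaling-laws observation concrete: what the 2020 paper described is a power law, which is exactly why returns diminish. A minimal sketch in Python; the functional form follows "Scaling Laws for Neural Language Models" (Kaplan et al., 2020), but the constants below are illustrative, not the paper's fitted values.

```python
# Sketch of the power-law relationship between training compute and
# test loss described in "Scaling Laws for Neural Language Models":
#   L(C) = (C_c / C) ** alpha
# alpha is roughly the paper's fitted compute exponent (~0.05);
# the constant C_c here is illustrative, not the paper's fit.

def loss_from_compute(compute_pf_days: float,
                      c_c: float = 3.1e8,
                      alpha: float = 0.050) -> float:
    """Predicted cross-entropy loss for a given training compute budget."""
    return (c_c / compute_pf_days) ** alpha

# Key consequence: every 10x of extra compute only shaves a constant
# *factor* off the loss, so absolute gains shrink as models grow.
for compute in (1e2, 1e3, 1e4, 1e5):
    print(f"{compute:>8.0e} PF-days -> predicted loss "
          f"{loss_from_compute(compute):.3f}")
```

Under this form, each tenfold increase in compute multiplies the loss by the same factor (10 to the power of minus alpha), which is the mathematical shape of "scaling works, but less."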
Another independent analysis, by analyst Zvi Mowshowitz, pointed out that while the GPT-5 base model was merely decent, its advanced variants (GPT-5 Pro and GPT-5 Thinking) were a substantial improvement over o3-pro and o3 respectively, especially at mitigating hallucinations. According to his data, "GPT-5 Auto" (the base version) "seems like a poor product unless you are on the free ChatGPT plan."

Maybe what we need is symbolic AI. Symbolic ("classic") AI represents knowledge using symbols and rules, and relies on logic and formal reasoning to solve problems and make decisions. This approach dominated the AI landscape until the 1990s, when the lack of notable advances caused the discipline to stagnate and go through an "AI winter." We emerged from it with the connectionist approach: neural networks that represent knowledge through the connections and weights of the nodes of a network of artificial neurons. This is the discipline that gave rise to generative AI and the overwhelming success of ChatGPT and its rivals. Its surprisingly good behavior unleashed the current AI fever, but performance gains are slowing down.

AI skeptics double down. Analysts like Ed Zitron (more extreme) or Gary Marcus (a defender of symbolic AI) have long warned about the exaggerated expectations generated by generative AI. Even people who were instrumental in the creation of ChatGPT, such as engineer Ilya Sutskever, have warned of the limits of scaling. Reasoning models have softened the criticism and are a great alternative to the apparent stagnation of standard models, but even with them the feeling is that AI will not go much further.

At this rate we will never reach AGI.
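Before moving on, an aside to make "symbolic AI" concrete: knowledge is stored as explicit symbols and if-then rules, and answers are derived by logical inference rather than learned weights. A minimal forward-chaining sketch in Python; the facts and rules are invented for illustration, not taken from any real system.

```python
# Toy symbolic-AI sketch: knowledge lives in explicit symbols and rules,
# and conclusions are derived by forward chaining (logical inference),
# not by the trained weights of a neural network.

facts = {"socrates_is_human"}

# Each rule pairs a set of premises with the conclusion it licenses.
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

def forward_chain(facts: set, rules: list) -> set:
    """Fire every rule whose premises hold until no new fact is derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# Unlike a neural network's output, every conclusion here is traceable
# to an explicit rule -- the transparency symbolic AI is praised for.
```

The design trade-off is the one the article describes: such systems reason transparently over what they are told, but someone has to hand-write every rule, which is what stalled the approach in the 1990s.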
Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, reflected on the problem a few months ago and concluded that today's AIs have become "a country of yes-men on servers." To him that is worrying: "To create an Einstein in a data center we don't need a system that has all the answers, but rather one that is able to ask questions nobody had thought of, or nobody had dared to ask." As this expert pointed out, current AI does not (usually) generate new knowledge; it "simply fills the gaps in what humans already knew." Current AI is like a brilliant, very diligent student, but one that does not challenge what it has been taught. It does not question it, and it does not propose ideas that go against the data it was trained on. Yann LeCun, one of the pioneers of AI, has already delivered his verdict on current generative AI: it's dumb.

Lowering expectations. The outlook is worrying for those investing billions of dollars in data centers or in training new foundational models, especially because the impact may not be as gigantic as they forecast and promised. Ed Zitron said in The New Yorker that "this is a $50 billion market, not a $1 trillion one." Marcus agreed: "$50 billion, yes. Maybe $100 billion."

What if AI has stagnated. If it indeed has, what we can expect is for AI to become a useful tool for saving time and improving the results of certain tasks (it is already doing so) but not to cause the seismic impact on society and employment that figures like Altman, Musk, Amodei or Zuckerberg defend with their investments. If that happens we will undoubtedly have a powerful tool for doing things better and faster. That is exactly what other fantastic disruptions, such as the PC or the internet, gave us. But many probably expected more. Much more. And that is the problem: the expectations.

Image | Levart Photography

In Xataka | There are too many AI models
and that spells a real death sentence for Anthropic and Claude