In December 2022, ChatGPT left us all speechless. However, two and a half years later we have a problem: after all this time, it does not seem able to go much further. It has improved, yes, but meanwhile we are drifting away from the great promise of AI, which is none other than going beyond and reaching what is known as Artificial General Intelligence (AGI). And it seems clear that this path, ChatGPT's path, is not the one that will get us there.
Promises, promises. A few months ago Sam Altman told the president of the United States, Donald Trump, that AGI would arrive before the end of his term. It is a message he has been repeating for months, although earlier he spoke of "a few thousand days." Dario Amodei, CEO of Anthropic, believes it could arrive even sooner, in 2026. Elon Musk, who promised a fully autonomous Tesla back in 2016, agreed and also pointed to 2026 as the year we will have an AGI. All of them are hyper-optimists for a simple reason.
Money. Like Altman, everyone who champions the rise of AI and the imminent arrival of AGI does so to raise more and more money for their companies. We know that developing, training, and running AI models costs a fortune, but progress in this field seems to be slowing down.
Doubts about scaling. Many believe that the current strategy of scaling up models (throwing more GPUs and more training data at them) no longer pays off as it once did. The latest versions of the big foundation models outperform their predecessors, yes, but not in a striking way. It is as if we had hit a ceiling.
This is not the way. For months now, expert voices have been making it clear that other approaches must be sought. Nick Frosst, a former student of Geoffrey Hinton and founder of Cohere, is convinced that current technology is not enough to reach AGI. What generative AI does is "predict the next most likely word," but that is very different from the way humans think.
LeCun believes an AGI is a long way off. Respected figures in the AI world such as Yann LeCun, head of Meta's AI division, are clear about it. Models like ChatGPT will not be able to match human intelligence. He also maintains that achieving human-level AI will take a long time: nothing like the "few thousand days" Altman claimed.
And Sutskever agrees. This OpenAI co-founder is also skeptical about the potential of generative AI, which according to him is barely improving. His new startup, Safe Superintelligence, aims to create a superintelligence with "nuclear-grade" safety, although so far there have been no details about the strategy they are following to achieve it. It is certainly not the one he followed when he helped create ChatGPT. A recent survey of an academic association of experts in the field pointed the same way: three quarters of respondents do not believe current methods will end up producing an AGI.
Generative AI is not a miracle. As The New York Times points out, what chatbots like ChatGPT and other developments in this field do is one thing very well, "but they are not necessarily better than humans at others." There is a certain temptation to think of these chatbots as something magical, but "these systems are not a miracle. They are very impressive gadgets."
ChatGPT does not challenge what it knows. Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, is clear that generative AI is very good, but it is far from taking us to AGI. What we have, he explained a few weeks ago, is like "a country full of people who say yes to everything." ChatGPT does not challenge us, and it does not challenge what it knows either. "We need a system that is able to ask itself questions that nobody had thought of, or that nobody had dared to ask," he said.
Many challenges ahead. Among the differences between AI and human intelligence is that the latter is grounded in the physical world: part of our intelligence is knowing when to flip the toast, for example. Advances in robotics and sensors can help solve such problems, but this is a good example of how many challenges remain before we achieve that artificial general intelligence which is supposed to match (or surpass) human intelligence in every discipline.
And what about the AIs that reason? Generative AI companies have found a small respite in their chatbots' reasoning modes. Here we find a singular advance that allows the AI to answer more precisely and in more detail by "thinking about" its answers and following a "reasoning" process that tries to imitate the human one.
However, this does not seem to take us to an AGI either: these reasoning modes are rather a way of trying to make the answers somewhat better and to avoid the chatbots' "hallucinations." Despite everything, ChatGPT and its rivals keep making mistakes in these and all the other modes.
Alternatives. Some possibilities appear on the horizon. The current approach based on neural networks can be complemented by symbolic (rule-based) systems, which may bring elements such as deductive reasoning or the handling of abstract knowledge to current models. Work is also under way on training models in physically accurate virtual environments, and on so-called meta-learning systems, which make it possible to train new neural networks quickly and with a limited dataset.
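To make the symbolic side of that idea concrete, here is a minimal, purely illustrative sketch of the kind of explicit deductive reasoning a rule-based system contributes: facts and rules are stated openly and conclusions follow mechanically, unlike a neural model's statistical next-word prediction. The rules and fact names below are hypothetical examples, not taken from any real system.

```python
# A tiny forward-chaining inference engine: apply rules of the form
# (premises -> conclusion) until no new fact can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            # Fire a rule only if all its premises hold and it adds something new
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Illustrative rule base (classic syllogism-style chain)
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

derived = forward_chain({"socrates_is_human"}, rules)
print(sorted(derived))
```

The point of such a sketch is transparency: every derived fact can be traced back to explicit rules, which is exactly the property researchers hope to combine with the flexibility of neural networks.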
But companies need products to sell us. These alternative research paths are there, but the problem is that companies remain, apparently, very focused on generative AI and on scaling. They continue to invest huge amounts of money in trying to improve their current models or in applying them to new problems, for example with the striking programming agents such as Cursor, Windsurf, or the new OpenAI Codex. Improving is certainly interesting, and it is what companies need to convince us to use their AI platforms and end up paying for them. But it also distracts from what should be the true final objective: achieving an AGI that, a priori, seems much further away than Altman, Amodei, or Musk would have us believe.
In Xataka | OpenAI's hypothetical social network does not want to connect people. It wants your data to train its AI