So we will never create an Einstein or a Newton

Apparently, AGI is almost within our grasp. We're not the ones saying it: prominent figures in the technology world have been saying so for some time. Let's see:

  • Sam Altman believes it will arrive in a few thousand days, though of course you need to generate hype to raise ever more funds.
  • The slogan of xAI, Elon Musk's startup (known for his unfulfilled promises), claims that with its AI we will be able to "understand the universe."
  • Jensen Huang, CEO of Nvidia, also believes we will create an AGI within five years (and meanwhile, he keeps selling GPUs).
  • And Demis Hassabis, CEO of DeepMind, seems to agree, although we must admit that Google appears somewhat more cautious with these statements.

But all of that is promises. Expectations. Smoke. The unbridled optimism of this industry has triggered a colossal gold rush in which the bets on new startups, and especially on data centers (hello, Stargate), are absolutely spectacular and more typical of a bubble. Can these expectations be met? Sure. But nothing guarantees 1) when we will have an AGI, and above all 2) that we will in fact ever get one.

And that is an important problem, because expectations around AI have skyrocketed, and that is dangerous. Is it a promising advance? Definitely. Is it changing our world? So far, rather little.

It should be borne in mind that other technological revolutions of the past also took time and initially generated distrust and skepticism. In fact, the technology field has some famous cases of predictions that blew up in their authors' faces:

  • Thomas Watson, president of IBM, said in 1943: "I think there is a world market for maybe five computers."
  • Bill Gates allegedly said (although he later denied it) that "640K [of memory] ought to be enough for anybody."
  • His great friend Steve Ballmer (who, by the way, now has more money than him) laughed at the iPhone when it launched.
  • Robert Metcalfe, co-inventor of the Ethernet standard, predicted in 1995 that the Internet would "soon go spectacularly supernova and in 1996 catastrophically collapse." He later admitted his mistake and literally ate his words.

Those are many significant blunders from people who, in theory, knew a great deal about what they were talking about. And they all demonstrate one thing: predicting the future is not only impossible, it's dangerous. Which makes it clear that perhaps we should give AI a (big) chance, no doubt.

So we won't get an AI that is an Einstein or a Newton just like that

But today we may be expecting too much of it. That is exactly what Thomas Wolf, co-founder and Chief Science Officer of Hugging Face, argued in a brief but brilliant essay on X. According to him, what we were promised (or what is still being promised) is very different from what we actually have.

And what was promised is that AI would revolutionize the world of science. That we would have new medications, new materials, new discoveries. The reality is that although there is some genuinely promising news, for now there are no revolutions.

For Wolf, what we have is "a country of yes-men on servers": although AI is assertive and expresses its opinions firmly and confidently, it does not usually challenge its users. And most importantly: it does not challenge what it knows.

As he explained, many people make the mistake of thinking that figures like Newton or Einstein were simply excellent students, and that genius appears when you extrapolate a top student far enough. As if making AI the best student in the world were enough. It is not:

"To create an Einstein in a data center, we don't just need a system that has all the answers, but rather one capable of asking questions nobody had thought of, or nobody had dared to ask."

It is a powerful and probably true message. While Sam Altman said superintelligence could accelerate scientific discovery, and Dario Amodei, CEO of Anthropic, assured us that AI will help us formulate cures for most types of cancer, the reality is another matter.

And the reality, according to Wolf, is that AI does not generate new knowledge "by connecting previously unrelated facts. (…) it simply fills in the gaps of what humans already knew." Here the claim is perhaps somewhat pessimistic, because AI does generate new knowledge and new content precisely by connecting the data it was trained on. We saw it recently in the field of microbiology, for example, and also in all those text, image, and video works that make us reconsider what creativity is and whether machines can become creative.

Wolf is not alone in this view. François Chollet, a former Google engineer now behind the ARC Prize benchmark, agrees. According to him, AI is capable of memorizing reasoning patterns (the ones used by reasoning models such as o1 or DeepSeek R1), but it is not capable of reasoning on its own and adapting to new situations.

Thus, according to Wolf, current AI is like a brilliant, very diligent student, but one who does not challenge what it has been taught. It has no incentive to question what it knows or to propose ideas that go against the data it was trained on. It limits itself to answering questions that have already been asked. This expert argues that we need an AI able to ask itself "what if everyone is wrong about this?", even when everything published on a given subject suggests otherwise.

The solution he proposes is to move away from current benchmarks. He talks about an "evaluation crisis": the tests focus on questions with clear, obvious, closed answers. Instead, we should especially value an AI capable of bold approaches and of going against established facts; one that asks "non-obvious" questions that lead it down "new research paths."

"We don't need an A+ student who can answer every question thanks to general knowledge. We need a B student who sees and questions everything the rest of the people overlooked."

And he may well be right, of course. This is being debated alongside the scaling problem (models are no longer getting much better even though more resources and data have been used), and it does not seem that this is the way to achieve an AGI.

It seems that companies have noticed and are looking for other paths. The new reasoning models look like a more promising route, and indeed they manage to find bold solutions. We saw it recently with those AI models that cheated to win at chess, for example. Ilya Sutskever, co-founder of OpenAI and now pursuing an AGI, has also made it clear that he is following a different path from the one that led to developing ChatGPT.

Will he succeed where others are failing? Who knows. For him, and for everyone else, though, Wolf's reflection is important. Perhaps what we need is precisely an AI that does not say yes to everything, but one that questions what we know.

Or what we think we know.

Image | Xataka with Freepik

In Xataka | The Government of Spain wants all AI content to carry a "label". It sounds great, but it is a tremendous challenge
