You have to put very large quotes around the "reasoning" of AI: welcome to "irregular intelligence"


The second definition of "reasoning" in the Dictionary of the Spanish Language is to "order and relate ideas to reach a conclusion." That is exactly what AI models such as OpenAI's o3-mini or DeepSeek R1 do: they collect information, order it, and build an answer in which they reach a conclusion.

But do these models really "reason"?

It is an inevitable question with a difficult answer. Since that term began to be used, at Xataka we have often put it in quotes, because comparing the theoretical "reasoning" of these machines with human reasoning is delicate.

And as they point out in Vox, scientists are still trying to understand how reasoning works in the human brain. There are in fact various types of reasoning, such as deductive reasoning (from a premise we reach a conclusion) or inductive reasoning (we make a broad generalization from a series of observations).


Dividing a problem into parts in order to solve it is also a form of reasoning. That is in fact the idea behind the so-called "chain of thought" ("chain-of-thought") that OpenAI already discussed in September 2024, when o1 was launched. It is a process that mimics human reasoning to some extent on such problems, but does the machine reason the way people do?
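The difference between asking for an answer directly and asking for a "chain of thought" can be sketched with two prompt templates. This is an illustrative sketch only; the prompts and the sample question are invented for this example and are not OpenAI's actual internal mechanism:

```python
# Minimal sketch of "chain-of-thought" prompting (hypothetical prompts):
# instead of asking only for the final answer, we ask the model to write
# out the intermediate steps before concluding.

QUESTION = "A cafe sells 12 muffins per tray. How many muffins are on 7 trays?"

def direct_prompt(question: str) -> str:
    """Ask for the answer alone, with no intermediate steps."""
    return f"{question}\nAnswer with a single number."

def chain_of_thought_prompt(question: str) -> str:
    """Ask the model to decompose the problem and solve it in stages."""
    return (
        f"{question}\n"
        "Let's think step by step. First restate what is asked, "
        "then break the problem into parts, solve each part, "
        "and only then state the final answer."
    )

print(direct_prompt(QUESTION))
print(chain_of_thought_prompt(QUESTION))
```

Either string would then be sent to a model; the second tends to elicit the staged, part-by-part answers these "reasoning" models are known for.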

For some experts, one of the things that distinguishes us (for the moment) from machines in this type of task is that we can discover "a rule or pattern from limited data or experience and apply this rule or pattern to new and unknown situations."

This was stated by Melanie Mitchell, of the Santa Fe Institute, and her colleagues in a study on the "reasoning" capabilities of AI models in November 2023. At that time o1 and its rival "reasoning" models were still almost a year away, but the point remains valid, because AI models still need to be trained with vast amounts of information.

But Mitchell analyzed o3's spectacular performance on the ARC tests a year later, and was surprised at how well it had done. She was also struck by the large amount of computing resources that "reasoning" capacity required, and wondered whether the machines were really using the kind of abstraction those tests actually demand.

There are studies that question precisely whether AI is "reasoning." One published on October 1, 2024, signed by four researchers from the Israel Institute of Technology and Northeastern University, asked whether LLMs (Large Language Models) solve these reasoning tasks by learning robust, generalizable algorithms, or by memorizing the data they were trained on. Do they use heuristics and experience, or do they "think"?

The conclusion they reached after their tests is that there is apparently a mixture of both: the models implement a set of heuristics, a combination of memorized rules, to carry out their arithmetic "reasoning." They do not "reason," or at least they do not do it the way human beings do. Above all, the experts note, they apply heuristics and a series of memorized data to solve the problem. Their ability to extrapolate and adapt to new problems is limited. They are diligent students, but they are not "brilliant."
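The gap between memorizing answers and learning a general algorithm can be shown with a toy contrast. This is an invented illustration of the distinction, not the method used in the paper:

```python
# Toy contrast: a "memorized" lookup table only covers pairs seen during
# "training", while a general algorithm extrapolates to any unseen pair.

MEMORIZED = {(2, 3): 5, (7, 8): 15, (10, 4): 14}  # limited "training data"

def add_by_memory(a: int, b: int):
    # Returns None for anything outside the memorized pairs:
    # no generalization beyond what was seen.
    return MEMORIZED.get((a, b))

def add_by_algorithm(a: int, b: int) -> int:
    # A general rule that works for any pair of integers.
    return a + b

print(add_by_memory(2, 3))         # seen before → 5
print(add_by_memory(123, 456))     # unseen → None
print(add_by_algorithm(123, 456))  # generalizes → 579
```

The study's finding, in these terms, is that LLMs sit somewhere between the two functions: a bag of memorized rules that covers many cases, rather than a single rule that covers them all.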


The ARC-AGI test poses problems that are relatively simple for human beings, but that AI models have a really hard time with.
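The flavor of these puzzles is "infer the rule from one or two examples, then apply it to a new grid." Here is a toy, ARC-style task invented for illustration (it is not taken from the actual ARC-AGI suite), where the hidden rule is a cell-by-cell color substitution:

```python
# Toy ARC-style puzzle: from a single input→output demonstration, infer a
# color mapping and apply it to a new, unseen grid. Grids are lists of
# lists of integers, where each integer is a "color".

EXAMPLE_IN  = [[1, 2], [2, 1]]
EXAMPLE_OUT = [[3, 4], [4, 3]]   # hidden rule: 1→3, 2→4

def infer_rule(grid_in, grid_out):
    """Learn the color mapping from one demonstration pair."""
    rule = {}
    for row_in, row_out in zip(grid_in, grid_out):
        for a, b in zip(row_in, row_out):
            rule[a] = b
    return rule

def apply_rule(rule, grid):
    """Apply the learned mapping to every cell of a new grid."""
    return [[rule[cell] for cell in row] for row in grid]

rule = infer_rule(EXAMPLE_IN, EXAMPLE_OUT)
print(apply_rule(rule, [[2, 2, 1]]))  # → [[4, 4, 3]]
```

A human solves this kind of task from a single example; real ARC-AGI tasks use far richer rules (symmetry, counting, object movement), which is exactly where models struggle.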

For other experts, such as Shannon Vallor of the University of Edinburgh, what AI does is, once again, imitate human behavior. Traditional chatbots such as ChatGPT do it when generating text, and so do these "reasoning" models, trained with reinforcement learning (RL) to imitate the human reasoning process, decomposing the problem and trying to solve it in stages.

Some researchers speak of an "irregular" (jagged) intelligence because, as Andrej Karpathy explained, these models can handle complex problems and yet get stuck on ARC tests that are very simple for human beings.

Until they stop getting stuck, of course. That is what all the AI companies are pursuing with ever more advanced and versatile models. Less irregular ones. And when they arrive (if they arrive), it may not matter much whether they "reason" or not. Nor whether we put quotes around that word.

Image | Todd Martin

In Xataka | Copilot, ChatGPT and GPT-4 have changed the world of programming forever. This is what programmers think



The news "You have to put very large quotes on the 'reasoning' of AI: welcome to the 'irregular intelligence'" was originally posted in Xataka by Javier Pastor.
