AI is very comfortable inventing whatever it doesn't know. Some researchers think they know how to stop it

Hallucinations have been the Achilles' heel of AI ever since chatbots became part of our lives. Companies like OpenAI promised that hallucinations could be mitigated with adequate training processes, but years later both ChatGPT and its direct rivals keep making up answers when they are not sure what to say. Shuhui Qu, a researcher at Stanford University, believes she has found a way to address the problem.

A structural problem. Current language models have a factory defect: they answer with complete confidence even when they lack the necessary information. This has to do with how they proceed when producing any answer: LLMs have no problem filling in missing information, even when they are not being faithful to reality and are working from assumptions.

First step: recognize it. Qu has published an article introducing what she calls Bidirectional Categorical Planning with Self-Consultation, an approach that starts from a simple idea, but an uncomfortable one for big technology companies: forcing the model to explicitly recognize what it does not know, and not move forward until it has resolved it.

A more scientific method. The idea is not for the model to think better, but to stop pretending it knows everything. The approach starts from a basic premise: every time the model takes a step in its reasoning, it should ask itself whether it really has the information needed to take it. When an unknown condition appears, the model cannot continue. It is not allowed to fill the gap with an assumption; it has to stop and resolve the uncertainty before moving forward. It can do this in two ways: either by asking a specific question to obtain the missing information, or by introducing an intermediate step (a verification, an additional query) that becomes part of the chain of reasoning.

The method.
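To make the idea concrete, here is a minimal sketch of that "check preconditions, then act" loop. This is an illustration of the general technique as the article describes it, not the paper's actual implementation: the step/fact representation, the `ask` callback, and the recipe example are all invented for this sketch.

```python
def plan_with_self_consultation(steps, known_facts, ask):
    """Execute reasoning steps in order. If a step's precondition is
    unknown, stop and resolve it (here: by asking a question) instead
    of filling the gap with an assumption."""
    trace = []
    for step in steps:
        missing = [f for f in step["requires"] if f not in known_facts]
        for fact in missing:
            # Self-consultation: the model pauses and asks rather than guessing.
            known_facts[fact] = ask(fact)
            trace.append(("asked", fact))
        trace.append(("did", step["action"]))
    return trace

# Toy run: a recipe task where the oven temperature was withheld.
steps = [
    {"action": "mix ingredients", "requires": ["ingredient list"]},
    {"action": "bake",            "requires": ["oven temperature"]},
]
facts = {"ingredient list": "flour, eggs, sugar"}
trace = plan_with_self_consultation(
    steps, facts, ask=lambda f: f"(user supplies {f})"
)
# trace: [("did", "mix ingredients"), ("asked", "oven temperature"), ("did", "bake")]
```

The point of the toy run is the ordering: the plan is blocked before "bake" until the withheld fact is obtained, which is exactly the behavior the method demands.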
Using an external layer of code, the researchers made models like GPT-4 respond only when they had complete information. They tested it on simple tasks, asking about cooking recipes and WikiHow guides. The key? They purposely withheld information to force the model to stop. The research concluded that making preconditions explicit and verifying them before moving forward significantly reduces LLM errors when information is missing. Of course, the paper admits that even this is not enough to make hallucinations disappear completely.

Not so fast. Although the researcher's idea sounds promising, it is quite unlikely we will see it deployed in the short or medium term. This way of processing breaks the natural flow of current LLMs, which are designed to return complete answers. To make such a system work, an additional layer has to be added on top: preconditions that force the model to control its calls, interpret the responses, classify them, and block itself from answering when it does not have all the information. In other words, for now, AI will keep confidently shooting the three-pointers we are already used to.

Image | Xataka

In Xataka | ChatGPT invents data and that is illegal in Europe. So an organization has set out to fix it with a lawsuit
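The "additional layer" described above can be sketched as a thin wrapper around the model call that classifies each reply and blocks a final answer while unknowns remain. Everything here is an assumption for illustration: the `NEED:`/`FINAL:` tag convention, the function names, and the toy model are invented, not taken from the paper or any real API.

```python
def guarded_answer(model, question, fetch_fact, max_rounds=5):
    """Wrap a model call: classify each reply, resolve declared
    unknowns, and only return once the model emits a final answer."""
    context = [question]
    for _ in range(max_rounds):
        reply = model("\n".join(context))
        if reply.startswith("NEED:"):
            # The model declared a missing precondition; resolve it, don't guess.
            fact = reply[len("NEED:"):].strip()
            context.append(f"{fact} = {fetch_fact(fact)}")
        elif reply.startswith("FINAL:"):
            return reply[len("FINAL:"):].strip()
    return "unresolved: could not gather enough information"

# Toy "model" (a stand-in for an LLM): it refuses to answer a recipe
# question until the number of servings appears in its context.
def toy_model(prompt):
    if "servings" not in prompt:
        return "NEED: servings"
    return "FINAL: use 200 g of flour"

print(guarded_answer(toy_model, "How much flour?", lambda f: "4"))
# prints: use 200 g of flour
```

Note the self-blocking behavior: the wrapper never surfaces an answer on the first round, because the model itself flagged a precondition it could not verify.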

Retouching a photo without making things up: Google has pulled it off and gained 10 million users

Gemini recently made news with the integration of a new image editor, called Nano Banana. Its great virtue is that it has achieved something that seemed impossible: finally, an AI is able to retouch a photo without changing it completely. The launch has gone well for Google, which has announced that thanks to Nano Banana, Gemini has added no fewer than 10 million users.

The numbers. We know this thanks to Android Central, which echoed the latest post by Josh Woodward on X. The Google vice president announced that since the launch of Nano Banana on August 26, more than 200 million images have already been generated and the Gemini app has gained 10 million users. Woodward jokes that the TPUs are on fire (TPU stands for 'Tensor Processing Unit', chips used for neural networks).

The milestone. Image generation was already at such a high level that distinguishing what is AI and what is not is almost impossible. However, AI was unable to make a small modification to an image without inventing things, and let's not even talk about asking it to add text. The striking thing about Nano Banana is that you can add, change or remove something from a photo without altering anything else. For example, we can ask it to change our clothes or add a person to a photo.

Viral. Artificial intelligence is already mainstream; practically everyone knows what a chatbot is and many people use one for all kinds of queries (even as if it were their psychologist). If anything has helped AI reach the general public, it has undoubtedly been image generation, and the most viral case was the Ghibli-style photos. Although not with the same impact, in the days Nano Banana has been available, several prompts have gone viral, such as the one that turns any person or pet into an action figurine.

Tremble, Photoshop. As we said, image editing was a point that escaped AI, but Nano Banana is able to remember and apply small modifications.
For now, Nano Banana does not meet the requirements for professional use, but if it keeps evolving it could be a serious threat to image-editing software like Photoshop.

In Xataka | Canceling an Adobe subscription has been hell for years. The US has just sued them over it
