OpenAI's reasoning models finally do what was until now impossible for them

For more than half a year, users of ChatGPT, and also those of the API, have had access to two types of models: the GPT models, such as GPT-4o, and reasoning-oriented models, such as o1. The problem is that, until now, we were forced to alternate between them for tasks that require complex thinking, web browsing or image interpretation. As of today, that begins to change.

New models on stage. The o3 and o4-mini models, presented this Wednesday, close the gap with the GPT models. For the first time, reasoning-centered models get access to tools that, so far, were exclusive to the classic models. Namely: analyzing images (and thinking with them), analyzing files, generating images, browsing the web, doing research and using advanced voice mode.

The keys to "visual reasoning". The interesting thing is not that o3 can read what is in a photo: it decides how to look at it. It knows whether it has to rotate the image, zoom in on a detail or ignore the irrelevant. That process is now part of its reasoning chain. It does not merely describe an image; it "thinks" with it to give us a better answer.

A remarkable jump. We are facing a series of improvements that the most demanding users will undoubtedly appreciate. Reasoning models, it should be remembered, "think" before responding: they generate an internal chain of thought before offering an answer. They are not the best option for those looking for texts with literary flair or quick responses on any subject, but they are ideal for coding, scientific reasoning and planning complex workflows, especially in multi-step, agent-driven environments. With this in mind, and as expected, OpenAI has also improved the core capabilities of these models, making them more capable and precise.

OpenAI o3: this model achieves standout performance on SWE-bench Verified (without customization), a test that measures coding skills, with a score of 69.1%.
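The tool access described above is exposed to developers as tool definitions attached to an API request. As a minimal local sketch, the function below only assembles such a payload; the model id ("o3") and the "web_search_preview" tool type are assumptions based on OpenAI's published examples, and actually sending the request would require the `openai` package and an API key.

```python
# Hedged sketch: build a Responses-API style payload that pairs a
# reasoning model with a web-search tool. The model id and tool type
# are assumptions from public OpenAI examples, not verified here.

def build_tool_request(prompt: str, model: str = "o3") -> dict:
    """Return a request payload with a web-search tool attached."""
    return {
        "model": model,
        "input": prompt,
        "tools": [{"type": "web_search_preview"}],
    }

payload = build_tool_request("Summarize this week's AI model launches")
```

The point of the sketch is the shape of the request: the model decides during its reasoning chain whether and when to invoke the attached tool, rather than the caller scripting each step.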
According to OpenAI, in evaluations carried out by external experts, o3 makes 20% fewer major errors than OpenAI o1 on difficult real-world tasks.

OpenAI o4-mini: it offers similar performance, with a score of 68.1%. To put that in context: o3-mini obtained 49.3% on the same test, while Claude 3.7 Sonnet reached 62.3%. This model is optimized for fast, cost-effective reasoning, and performs especially well on mathematical, coding and visual tasks.

The model that was not going to launch. OpenAI has changed its mind. In February, Sam Altman ruled out launching o3 as a standalone product, but just a few weeks ago he announced a "change of plans". That turn materializes today with the arrival of o3 and o4-mini, the new models that mark a new chapter in the company's strategy.

On the way to ChatGPT. From today, ChatGPT Plus, Pro and Team users can already start using o3 and o4-mini. In the coming weeks, o3-pro will arrive, a more powerful version of the reasoning model that will be available to Pro plan subscribers. Meanwhile, those users can keep working with o1-pro.

Images | OpenAI

In Xataka | OpenAI's hypothetical social network does not want to connect people. It wants your data to train its AI

A European startup and Microsoft have teamed up to tackle a major challenge: simulating brain reasoning

Simulating the human brain is one of the great scientific challenges of our time. It is not just a matter of computing power: for years, laboratories around the world have invested millions in trying to replicate its workings, but even with advances in artificial intelligence (AI) and supercomputing, the goal is still far away. Why is it so difficult? Because the brain is not just an information processor, but an ultra-efficient system of barely 1.4 kg with more than 86 billion interconnected neurons. Imitating its cognitive, emotional and linguistic abilities remains a frontier we have not yet crossed, although some believe we are close.

INAIT's simulation technology and Microsoft's computational power. INAIT, a Swiss startup founded in 2018 under the premise that "the only proven form of intelligence is in the brain", has closed an alliance with Microsoft to develop AI models capable of simulating the reasoning of mammals. Their objective is to apply them in sectors such as finance, risk management and personalized advice. In theory, everything fits perfectly: INAIT says it has a simulation technology based on decades of research financed by the Swiss government, and Microsoft puts on the table the infrastructure and business model needed to turn this bet into products ready to reach millions of customers worldwide.

But the idea is not to develop simulations of human brains with all their faculties overnight. INAIT is training digital models of different sizes, designed to address specific challenges: for example, advanced trading, or the development of industrial machines capable of adapting to complex and dynamic environments.

Digital visualization of a region of the neocortex and the thalamus, including its network of blood vessels.
Here the advantage is clear: exploiting one of the most amazing faculties of the brain, its ability to face completely new scenarios and adapt quickly and continuously using prior knowledge, even when dealing with sensory, emotional or social stimuli never experienced before.

According to the Financial Times, Adir Ron, Microsoft's director of AI and cloud for emerging companies, highlighted the Swiss startup's approach: "INAIT is a pioneer in a new AI paradigm: it goes beyond traditional data-based models towards digital brains capable of true cognition." For his part, Henry Markram, co-founder of INAIT, said that AI models based on brain simulations could not only learn much faster than current deep reinforcement learning systems, but also consume significantly less energy. That would mean a key advantage in terms of efficiency and sustainability.

Now we can only wait to see whether this vision translates into tangible advances or whether, like so many other technological promises, it fails to advance far enough. The possibility of building AI models that imitate the learning and adaptability of the human brain is a monumental challenge, but also an extremely interesting goal.

Images | Milad Fakurian

In Xataka | We already know at what speed our brain processes information: just 10 bits per second

In Xataka | Figure creates a system to build humanoid robots at large scale. And of course, there will be robots manufacturing robots

How to use Gemini 2.0 Flash and 2.0 Flash Thinking with reasoning on the web or on your phone

We are going to explain how to use Gemini 2.0 Flash and 2.0 Flash Thinking Experimental on the web and in Google's AI app. This way, you can use the new models launched by the company, including the reasoning one, which becomes free for all users. We will tell you where to find this option, both in the web version and in the mobile version of Gemini, and also how to use these models once you have activated them.

Change the model that Gemini uses. Gemini has an option to choose the model you want to use. In the web version, it is located on the left, and by default you will see the model you are currently using under the Gemini name. When you press the model selector button, a window will open with the available Gemini models. This is where you can choose the normal 2.0 Flash model, but also the experimental Thinking one to test the reasoning model; there is even a reasoning model to use with Google apps.

This option is also available in the mobile app. In this case, the model selector is in the upper central part of the screen, and the list of models opens downwards. When you activate a specific model, it will appear marked in the selection button on the left. In addition, you may see a message warning you about some details of its use.

Now, simply write the prompt you want to interact with this particular model. After writing your question, you will get the answer. The Flash Thinking model will show a window with the reasoning it has followed step by step before building the answer. You can close this window if you are not interested, and the answer will appear at the bottom.

In Xataka Basics | Gemini guide: 36 functions and things you can do with Google's artificial intelligence
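For developers, the choice made in the model selector described above corresponds to picking a model id when calling the Gemini API. A minimal sketch of that mapping follows; the ids ("gemini-2.0-flash", "gemini-2.0-flash-thinking-exp") follow Google's public naming at the time, but should be treated as assumptions here.

```python
# Sketch: map the Gemini UI model picker onto API-style model ids.
# The ids below are assumptions based on Google's public naming.

GEMINI_MODELS = {
    "fast": "gemini-2.0-flash",                    # default, quick answers
    "thinking": "gemini-2.0-flash-thinking-exp",   # shows step-by-step reasoning
}

def pick_model(needs_reasoning: bool) -> str:
    """Return the model id matching the choice made in the selector."""
    return GEMINI_MODELS["thinking" if needs_reasoning else "fast"]

print(pick_model(True))   # the reasoning model id
```

The same trade-off the article describes in the UI applies here: the "thinking" variant exposes its step-by-step reasoning, at the cost of slower responses.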

Free access to its reasoning model

Google has announced the launch of Gemini 2.0, its new family of AI models. The company actually launched experimental versions of Gemini 2.0 Flash in December and, above all, previewed its reasoning model, Gemini 2.0 Flash Thinking, but now all those moves are consolidated with more mature and stable versions. Best of all, Gemini 2.0 Flash Thinking, Google's answer to OpenAI o1 or DeepSeek R1, is already available as one of the selectable models in Gemini's mobile and desktop apps. Gemini 2.0 is more "agentic", and according to Google it can "interact with YouTube, Search and Google Maps".

The company announced bittersweet fiscal results these days: it has more money, users and profits than ever, but it is on the defensive. Sundar Pichai, CEO of Alphabet, highlighted how the company will invest $75 billion in AI in 2025, when its capex (capital expenditure) in 2023 was $32.3 billion. The decision shows a clear intention to invest, not necessarily to lead, but to avoid being left behind by a competition that also seems willing to spend colossal amounts of money.

One of the highlights of this Google announcement is the launch of an experimental version of Gemini 2.0 Pro. This model, successor to Gemini 1.5 Pro, is more precise and "factual", and according to Google it clearly outperforms the previous version in programming and math-related tasks. Google describes Gemini 2.0 Pro as "its most capable model so far", and it will be available to Gemini Advanced subscribers and to those with access to Vertex AI and AI Studio. Alongside Gemini 2.0 Pro comes Gemini 2.0 Flash-Lite, a lighter version that offers better response quality than the previous Gemini 1.5 Flash while maintaining its speed and cost.
This model has a gigantic context window (one million tokens) and multimodal support: for example, it can generate a one-line description for each photo in a set of 40,000 different photos, all for less than a dollar, according to those responsible.

In Xataka | Google demonstrates what it is capable of: it presents an amazing AI agent that can use the browser for you
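The Flash-Lite pricing claim above reduces to simple arithmetic: captioning 40,000 photos for under a dollar caps the per-image cost at $1/40,000. The snippet below just runs that division; the dollar figure is the article's upper bound, not an official price list.

```python
# Back-of-the-envelope check of the Flash-Lite pricing claim:
# captioning 40,000 photos for under $1 implies this per-image cost cap.

TOTAL_COST_USD = 1.00   # the article's "less than a dollar" upper bound
NUM_PHOTOS = 40_000

cost_per_photo = TOTAL_COST_USD / NUM_PHOTOS
print(f"${cost_per_photo:.6f} per photo")  # $0.000025 per photo
```

At 2.5 thousandths of a cent per image, the claim is really about throughput pricing: costs this small only matter at bulk scale.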

The new reasoning model lands in free ChatGPT accounts

We are witnessing the evolution of artificial intelligence (AI) in real time. A little over two years ago we were surprised by the capabilities of GPT-3.5, the language model behind ChatGPT; now we have increasingly powerful reasoning models at hand. And everything seems to indicate that this industry will continue to evolve. Early today we talked about Microsoft's decision to make a version of o1 available to Copilot users, a feature that had been limited to paying users. A few hours later we have received news from OpenAI, which has just launched a new reasoning model: o3-mini, which has just begun its initial rollout.

o3-mini, the new OpenAI reasoning model. OpenAI's latest release responds to a strategy we have already seen with GPT-4o mini. Instead of everything revolving around its flagship, the company promotes a series of models aimed at addressing different needs. In this way, o3-mini is part of the o3 family presented at the end of last year, but it is focused on speed and efficiency. It should be noted that the company led by Sam Altman had already launched o1 in September 2024, and decided to skip the name o2, apparently for trademark reasons. So now we are facing a proposal that is promising not only for its excellent scores in different tests, but because it lands in free ChatGPT accounts, also in Spain.

But what does this mean? Well, reasoning models differ from the rest, such as GPT-4, in their ability to verify their answers thoroughly and step by step. They channel our questions through a reasoning process. This does not mean that they cannot be wrong, but their answers tend to be much more accurate. Reasoning takes time, so these models are not particularly fast. In any case, for certain scenarios where reasoned responses are needed in a reasonable amount of time, o3-mini could be a great option: for example, to power certain applications through API access. o3-mini is a model that excels at STEM problems.
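The API access mentioned above lets applications trade response speed for reasoning depth. The sketch below only assembles a chat-completions style payload locally; the `reasoning_effort` parameter and its low/medium/high values follow OpenAI's published documentation for o3-mini, but actually sending the request would need the `openai` package and an API key.

```python
# Hedged sketch: a chat-completions style payload for o3-mini.
# "reasoning_effort" (low/medium/high) trades speed for deeper
# step-by-step reasoning; values follow OpenAI's docs for o3-mini.

def build_reasoning_request(prompt: str, effort: str = "medium") -> dict:
    """Return a request payload for o3-mini with the chosen effort level."""
    if effort not in {"low", "medium", "high"}:
        raise ValueError("effort must be 'low', 'medium' or 'high'")
    return {
        "model": "o3-mini",
        "messages": [{"role": "user", "content": prompt}],
        "reasoning_effort": effort,
    }

request = build_reasoning_request("Prove that sqrt(2) is irrational",
                                  effort="high")
```

For latency-sensitive applications, "low" effort keeps the step-by-step verification the article describes while answering faster; "high" spends more thinking time on harder STEM problems.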
That is, if what you are looking for is to write a literary essay, this will probably not be the ideal model. However, if your requests are related to science, technology, engineering and mathematics, this reasoning model is likely to give you answers more consistent with your needs. OpenAI has measured users' reactions to o3-mini versus o1-mini: according to its data, external testers preferred the responses of the more recent model. In addition, o3-mini reduced "major errors" on "difficult real-world questions" by 39% during A/B tests, surpassing o1-mini. A notable result for free access.

If you have a Pro plan at $200 per month, you can use the model "without limits" and at a higher speed than free accounts. OpenAI also has a business plan, ChatGPT Enterprise; in this case, users will have to wait until next week to use the company's latest reasoning model.

Images | OpenAI

In Xataka | Mistral AI is the French startup that bet on efficiency before DeepSeek. Its future is uncertain
