Everything we know so far about the new version of Google’s operating system

We are going to give you all the information about Android 17 so you can learn about everything Google’s next mobile operating system will offer. For now, we will tell you everything we know at the moment, and between now and its launch we will update this article periodically to keep it current.

As usual, the base version of Android 17 will be the one that reaches Google’s Pixel phones, while the rest of the manufacturers will then adapt it with their customization layers, which may add extra functions. Here, though, we will focus on the base version of Android and what we know about its next release.

When do we expect Android 17 to be released

Google launched the first developer beta of Android 17 last February. These are very unstable versions, not recommended for everyday users, in which the visual changes are implemented gradually. The goal is to settle the software base so that app developers have time to adapt their apps.

If Google keeps up the accelerated pace it set in 2025, the first public beta is expected to arrive in May 2026, during Google I/O, the company’s annual event, and those betas should already include the main visual changes. Successive versions will then add the remaining features.

The final version should begin to arrive between June and July for Pixel phones, which are always the first to update. The rest of the manufacturers will then adapt Android 17 to their customization layers and release it on their devices between the second half of 2026 and the first half of 2027.

What news do we expect from Android 17

Let’s start with the confirmed new features of Android 17. They will not be the only ones, nor will they all land in the betas at the same time.
However, looking at the Canary builds, we know Google is preparing a before-and-after moment for gamers, as well as the beginning of the end for ChromeOS, so that there are not two operating systems but one. Let’s go through it.

‘Aluminum OS’ and the end of ChromeOS

The most disruptive novelty of Android 17 could be in its very foundations, although it has not yet been confirmed. It is a project code-named ‘Aluminum OS’, which seeks to unify phones, tablets and laptops under the same operating system. Google currently maintains two operating systems: Android for phones, tablets, watches and cars, and ChromeOS for laptops and desktops, although the latter focuses on the low end. What the company now wants is a single operating system that works for everything, which would mean the disappearance of ChromeOS.

The idea behind Aluminum OS is that Android can be used both on mobile and on the desktop, and when used on the desktop it will present the interface of a full desktop operating system. Android would then no longer focus only on phones and tablets, but also on laptops and computers of all ranges. As we have been learning in recent months, Google wants to attack three fronts with this:

Unify resources in a single development: Google currently maintains two parallel developments, Android and ChromeOS. With this, all efforts would be concentrated on a single one.

Assault on the high end: Chromebooks running ChromeOS are mainly cheap entry-level laptops, used for browsing the Internet and little else. Now, the leaks talk about premium devices, and about Google wanting Android to be an option for high-end laptops.

Gemini integrated into the core: We have also learned that Google wants to integrate Gemini deep into the operating system, so that it can be used natively on laptops without further complications.

What should be clear is that none of this is yet confirmed for Android 17.
It has been leaked as an ongoing development, but we do not know whether it will arrive in the next version of the operating system, nor whether it will be fully integrated at once or rolled out progressively. We will stay alert for new information.

This is the rest of the confirmed news

Controller remapping for gaming: Android 17 brings very good news for gamers. The first is native support for button remapping, so you can adapt actions to your controller and avoid buttons doing the wrong thing when you connect an Xbox or PlayStation controller.

Virtual controller function: It will translate on-screen touches into signals from a physical controller. With this, you will be able to play games designed only for touch screens with your favorite gamepad.

Universal clipboard: Google is preparing a system to copy and paste between mobile and PC, a universal clipboard. One of Apple’s great advantages is that what you copy on a Mac you can paste on an iPhone without doing anything, and vice versa; Google wants the same between Android and PC. This will allow more fluidity between devices and remove one of the classic advantages of the Apple ecosystem.

AI built into the core with AppFunctions: AppFunctions is a local framework that allows applications to expose their functions so that assistants like Gemini can execute them directly using natural language. This will allow AI to perform complex, multi-step tasks in the background within third-party apps.

News in Material 3 Expressive: Google is also adding new features to Material 3 Expressive, the Android interface design language. First, it will gain a transparency effect, somewhat similar to Apple’s Liquid Glass: elements such as the volume bar will have a semi-transparency that lets the color underneath show through. It is also expected that all icons will have to respect the accent color of the …
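Neither the remapping API nor the virtual-controller layer has been documented yet, but the idea behind both features can be sketched in a few lines of Python. Everything here (button names, actions, touch coordinates) is invented for illustration; this is not Android code.

```python
# Hypothetical sketch of button remapping plus the virtual controller.
# All names, actions and coordinates are invented for illustration.

DEFAULT_LAYOUT = {"A": "jump", "B": "crouch", "X": "attack", "Y": "interact"}

def remap(layout: dict, swaps: dict) -> dict:
    """Return a new layout with the given button-to-button swaps applied."""
    return {swaps.get(button, button): action for button, action in layout.items()}

# Virtual controller: translate a physical button press into the screen
# touch that a touch-only game expects.
TOUCH_ANCHORS = {"jump": (540, 1800), "attack": (900, 1700)}

def press(button: str, layout: dict):
    """Return the (x, y) touch to inject for a button press, if any."""
    return TOUCH_ANCHORS.get(layout[button])
```

With a table like this, swapping A and B for a given pad is a single call, `remap(DEFAULT_LAYOUT, {"A": "B", "B": "A"})`, and the virtual-controller idea reduces to looking up which screen touch a mapped action should fire.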

What Antigravity is, how it works and what you can do with Google’s artificial intelligence IDE

Let’s explain what Antigravity is and what you can do with this Google development tool. It is a developer environment powered by artificial intelligence, designed to make creating web projects as easy as possible.

We will start the article by explaining in a simple way what Antigravity is. Then, we will go over its main functions and what you will be able to do with it. Next, we will cover how to use it, and we will end by talking about its price and availability.

What is Antigravity

Antigravity is an integrated development environment (IDE), one of those programs developers use to write code and create applications or web pages. The difference is that Antigravity is an IDE powered by artificial intelligence. This means you can delegate complex coding tasks to autonomous AI agents, which write the bulk of the code and perform checks.

In other words, instead of writing all the code by hand as in classic IDEs, you simply explain to the Antigravity assistant what you want, and it will use artificial intelligence to plan it, program it, test it, and show you the final results for you to review. While other AI programming tools have agents that simply assist you while you write and you do the bulk of the work, Antigravity is the opposite: the burden of writing code falls on the AI, while you just describe the concept you want and review everything.

It is, then, an IDE with which someone who doesn’t know how to program can do it. It is like one of those generative AIs that respond with text or create images or videos of whatever you ask, except that what it produces is code. There are other AI services, like Claude, with very good code-writing capabilities. Antigravity, however, is capable of not only generating the code, but also testing it and detecting and correcting errors that may have crept in.
As for the artificial intelligence it delegates to, its agents use Gemini, Claude and GPT models. New versions are added as they are released; today, Gemini 3.1 Pro, Gemini 3 Flash, Claude Sonnet and Opus 4.6, and gpt-oss-120b are available. In essence, making a website with Antigravity can be a little more complicated than making it with Claude if you are an inexperienced user. But if you are a developer, you will have much more control, and you will also be able to use the AI to review other projects you have created.

What you can do with Antigravity

Antigravity is not simply a code editor; it goes much further. To start, here is a list of the tool’s main functions:

Planning and autonomous execution of tasks. Antigravity agents can autonomously plan, execute, and verify complex tasks through the editor, terminal, and browser. You simply give an instruction in natural language, which can range from creating a website to reviewing an existing one, and the agent takes care of the process, from planning to implementation.

Management of multiple agents in parallel. In the Manager view, a developer can launch five different agents working on five different bugs simultaneously, effectively multiplying their productivity.

Verifiable artifacts. Agents produce tangible deliverables such as task lists, implementation plans, screenshots, and browser recordings. This way, you can verify the logic the agent is following and leave comments on an artifact to request corrections or give feedback without stopping its workflow.

Browser control for automatic testing. Antigravity’s browser subagents can launch Chrome, interact with your application’s interface or website, and verify its operation automatically. In other words, besides creating a website, you can have the AI verify that everything works well.

Two modes of work.
Antigravity offers a Plan mode, which generates a detailed plan before acting on complex tasks, and a Fast mode, which executes instructions instantly, great for quick fixes. You can also choose the level of autonomy you give the agent.

Compatibility with the existing ecosystem. Antigravity works on top of your existing toolchain: Git, language runtimes, package managers, CLIs, and browsers, so you can open the same repositories and run the same commands you already use.

How Antigravity works

The way Antigravity works is simple. The main screen is divided in two. On one side you have the code, where you can open and browse the contents of your projects. On the other side you have the AI agent, with a prompt box where you only have to describe the website you want.

If you want to create a project from scratch, go to the agents section and describe the website or application you want to create. Do this as completely and thoroughly as possible, explaining what you want it to do and describing the design you have in mind. Then send the prompt: Antigravity will first think about how to do it, and then start writing the code, which you will see in the other part of the application. During the process you can watch the agent’s reasoning, and it will ask your permission before making certain changes or actions. You can also, when you launch Antigravity, open a project you already have to see its code. Then, in the agent section, you can ask it to make the changes or checks you want.

Price and availability

You can use Antigravity with your free Google account. This means you will be able to use it to create any website or application without problems, although the free tier is designed for occasional, undemanding use. However, if you pay for a Google AI Pro or Ultra subscription, you will have much broader limits if you are a professional developer …
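The plan, write, verify cycle described in this section can be reduced to a very small loop. This is only a toy illustration of the pattern: the `model` and `verify` functions below are stand-ins invented for the sketch, not Antigravity’s actual API.

```python
# Toy sketch of the plan -> code -> verify loop an agentic IDE runs.
# `default_model` and `verify` are stand-ins, not Antigravity's real API.

def default_model(step: str, payload: str) -> str:
    return f"{step}:{payload}"  # placeholder for an LLM call

def verify(artifact: str) -> bool:
    return artifact.startswith("code:")  # placeholder for browser/terminal checks

def run_agent(task: str, model=default_model, max_attempts: int = 3) -> dict:
    plan = model("plan", task)                 # 1. draft an implementation plan
    for attempt in range(1, max_attempts + 1):
        artifact = model("code", plan)         # 2. write the code
        if verify(artifact):                   # 3. test it automatically
            return {"plan": plan, "artifact": artifact, "attempts": attempt}
        plan = model("fix", artifact)          # 4. revise the plan and retry
    raise RuntimeError("agent could not verify its own work")
```

The point of the sketch is the shape of the loop: the human supplies only the task, and the agent keeps iterating between writing and verifying until its own checks pass.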

Google’s Pixel 10 Pro drops in price again and has just reached its minimum in this store

Within high-end mobile phones, the Google Pixel 10 Pro is one of the undisputed leaders. If you have been thinking about buying it for a while, now is a good time to do it: on Amazon it has just reached its lowest price, and you can get it for 749 euros.

Google Pixel 10 Pro – 128GB. The price could vary. We earn commission from these links.

Google’s most Pro mobile, now at its lowest price

The Google Pixel 10 Pro is (along with the Google Pixel 10 Pro XL) the brand’s best mobile, and its 6.3-inch screen makes it perfect for those who want a smartphone that can easily be held in one hand. In addition, this screen offers a 120 Hz refresh rate and a good level of brightness, so you can see it in perfect conditions, even in sunlight.

Internally, it mounts the Google Tensor G5 processor, accompanied by 16 GB of RAM and 128 GB of internal storage. Its battery lasts more than a day and supports 30 W wired and 15 W wireless fast charging. In addition, it runs pure Android with seven years of guaranteed updates.

One of its strong points is photography. On the front it incorporates a more than decent 42 MP selfie camera, while the rear module combines a 50 MP main sensor, a 48 MP wide angle and a 48 MP 5x telephoto. Likewise, the Pixel camera app is another of its great assets, since it delivers very good results.

⚡ IN SUMMARY: offer for the Google Pixel 10 Pro today

✅ THE BEST

Its processor: The Tensor G5 is much more efficient and heats up less than previous models, as well as offering more stable performance in heavier tasks.

The 3,300 nits: The Google Pixel 10 Pro’s display is one of the brightest on the market in 2026 and is visible even in direct sunlight.

❌ THE WORST

The design barely changes… The Pixel 10 Pro is almost identical to the Google Pixel 9 Pro, and is even somewhat thicker and heavier.
Base storage… In mid-2026, a base model starting at 128 GB is somewhat stingy, especially for a Pro phone that records in 8K.

💡 BUY IT IF… You come from a Pixel 7 or 8, since the jump in battery life and efficiency is huge, something you will not notice if you come from the previous model.

⛔ DON’T BUY IT IF… You are eager for fast charging: this phone falls short, since it is still anchored to 30 W wired charging. While other phones charge to 100% in 20 minutes, the Pixel needs more than an hour.

The best accessories for this Google Pixel 10 Pro

Pixelsnap Case for Google Pixel 10 Pro. The price could vary. We earn commission from these links.

Google Pixel Buds 2a – Wireless earbuds. The price could vary. We earn commission from these links.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of non-availability, offers may vary. Images | Álex Alcolea (Xataka) and Google. In Xataka | Mega-guide to setting up a home theater: projector, screen, sound system and more. In Xataka | The best soundbars for the money: which one to buy, and seven recommended models from 140 euros.

Google’s AI-powered search overhaul was a mystery in terms of monetization. In the end, it will be another subscription

For months, the technology industry has been closely watching how Google resolves its particular dilemma: how to integrate artificial intelligence into its search engine without destroying the advertising business that supports its empire. The doubts are clearing up little by little, and everything indicates that the company has already solved it: through AI Plus, a subscription costing 7.99 euros per month.

Dilemma. Traditional search results with blue links generate billions in advertising, making search one of the company’s most lucrative businesses and one of the reasons it is where it is. On the other side is its foray into the AI race, a business in which money is being burned on infrastructure in the hope that it will become profitable in the long term. This new business also clashes with the traditional advertising system from which Google profits so much. Embracing the new potentially means burying what feeds you. The company is looking for a way out of this dilemma with Google AI Plus.

What does the 8-euro subscription include? AI Plus has recently reached 35 new countries, among them Spain. For €7.99 per month, users get enhanced access to Gemini 3 Pro, the Nano Banana Pro image generator, the Deep Research tool, 200 GB of cloud storage and the ability to use Gemini directly in Gmail, Docs, and other Google apps. It also includes 200 monthly credits for Flow and Whisk, the company’s AI video creation platforms.

Duel with OpenAI. The price is tight, even lower than ChatGPT Go, which is offered in Spain at 9.99 euros per month. Both companies are fighting to attract users who want more than the free version, an opportunity to obtain more funding for their AI operations and, over time, draw even more customers into more complete, higher-cost plans.

Limitations to justify the price.
The version of Gemini 3 Pro included in AI Plus has significant restrictions compared to the 22-euro-per-month AI Pro subscription. For example, the context window is drastically reduced from 1 million tokens to 128,000, which means the model will “forget” information much sooner in long conversations or when analyzing long documents. Monthly credits for creation tools are also five times lower: 200 versus 1,000 in the Pro plan.

Google gives AI away to its storage customers. The company is adding all AI Plus features automatically to existing subscribers of Google One Premium (2 TB for 9.99 euros per month) at no additional cost. This avoids the absurd situation where paying more would mean having fewer features, but it also shows Google’s interest in getting its storage customers familiar with Gemini without them having to think twice.

A change for the media. Google is building a monetization strategy around AI, and that affects the media: outlets go from being the user’s final destination to becoming data providers that train and feed AI responses. When Gemini responds directly instead of displaying blue links, traffic to the original sites evaporates, along with the advertising revenue it generated. The issue is a thorny one, and it is still unknown how all the parties involved will come to an agreement.

Subscriptions. Google is betting on a freemium model that lets it make its AI investment profitable without completely abandoning its traditional advertising business. The question is whether users will be willing to pay for something they have until now considered free. Unlike Netflix or Spotify, AI subscriptions are still a relatively new concept for the general public. We will have to wait to find out whether this tightrope-walking exercise by Google convinces in the long term.

In Xataka | The number of new apps coming to the App Store has skyrocketed. We have a culprit: “vibe coding”
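The practical effect of cutting the context window from 1 million to 128,000 tokens can be illustrated with a sliding-window sketch. Word counting here is a crude stand-in for real tokenization, purely for illustration:

```python
def trim_context(turns: list[str], max_tokens: int) -> list[str]:
    """Keep only the most recent turns that fit the token budget.

    Illustration only: real models tokenize subwords, not whitespace words.
    """
    kept, used = [], 0
    for turn in reversed(turns):     # walk from the newest turn back
        cost = len(turn.split())     # crude stand-in for a tokenizer
        if used + cost > max_tokens:
            break                    # older turns are "forgotten"
        kept.append(turn)
        used += cost
    return list(reversed(kept))
```

With a 1M-token budget almost nothing is ever dropped; with a 128K budget the oldest turns of a long conversation or document fall off first, which is exactly the earlier “forgetting” described above.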

Qwen3-Max-Thinking rivals Google’s Gemini 3 Pro more than ever. The key lies in what is not being said

There are days when it feels like we open our phones and the leaderboard has changed again. Since ChatGPT broke out in November 2022, the artificial intelligence race has kept accelerating, and every few weeks a new model appears promising to push the bar a little further. Sometimes it is an update, other times a “flagship” with a different surname, but the pattern repeats itself: more power, more ambition and an increasingly global story. In this context, China is gaining visibility in an increasingly evident way, and the name now entering the conversation is Qwen3-Max-Thinking, Alibaba’s proposal to play in the same league as the great references of the moment.

At first glance, Qwen3-Max-Thinking might seem like just another name in the endless list of models. But there is a relevant nuance here: Alibaba presents it as its star model for reasoning tasks, and explicitly places it in the same conversation as Gemini 3 Pro. The company says it has scaled up parameters and invested compute in reinforcement learning to improve several dimensions at once, from factual knowledge and complex reasoning to instruction following, alignment with human preferences and agent capabilities. In other words: it is not just selling raw power, but a way to “think” better.

What the benchmarks show

To ground that promise, the most useful thing is to look at the comparative table at hand, with 19 benchmarks and a direct count: Gemini 3 Pro leads in 11, Qwen3-Max-Thinking in 8. This data, by itself, does not decide “who is better”, but it does help to understand the kind of fight Alibaba is picking with Google. Here it is worth being very literal about what is being measured: each benchmark focuses on a specific skill, from general knowledge to programming, tool use, instruction following or long-context analysis.
If we look for the point where Qwen3-Max-Thinking really hits home, one stands out above the rest: following instructions and aligning with what humans prefer in a conversation. In Arena-Hard v2, Qwen wins with 90.2 against Gemini’s 81.7, the largest difference in its favor in the entire table (8.5 points). This is not a minor nuance, because this type of benchmark rewards not just technical “success”, but the result a person considers most useful when blindly comparing answers. Add to that IFBench, where Qwen wins by a whisker (70.9 versus 70.4). Translated into real life: when the user does not formulate a perfect instruction, when the assignment is ambiguous or requires interpreting intent, Qwen seems more oriented toward nailing what is asked of it, and doing so in a way that feels natural.

The other area where Qwen supports its “thinking model” narrative is mathematical reasoning and logical problem solving. On HMMT, in both the November 2025 and February 2025 editions, Qwen is ahead (94.7 vs. 93.3 and 98.0 vs. 97.5, respectively). And on IMOAnswerBench it also wins, although by a minimal margin: 83.9 versus 83.3. These numbers do not suggest a beating, but they do suggest a consistent pattern: when the problem demands several steps of logic and cannot be solved with memory or a nice-sounding answer alone, Qwen tends to pull ahead.

To these improvements Alibaba adds a component that is becoming the new standard: the model does not stay in text, but can act. In its presentation, the company talks about adaptive tool use that allows information to be retrieved on demand and a code interpreter to be invoked. This orientation also shows up in the benchmarks: in HLE (w/ tools), Qwen wins with 49.8 against 45.8 for Gemini, suggesting a better ability to perform when the model can rely on external tools.
The fundamental change here matters: it is no longer just “what it answers”, but how it investigates, how it decides which tool to use and how it synthesizes what it finds.

There is a part of this comparison where Gemini 3 Pro feels more “engineer” than “conversationalist”, and it is precisely where many professional users put their focus. The Google model wins in MMLU-Pro and MMLU-Redux, two tests closely associated with general knowledge, and also in GPQA and HLE, which in this table appear as demanding evaluations with complex questions. In code, Gemini prevails in LiveCodeBench v6 and also in SWE Verified, which reinforces the idea that, for programming tasks, it remains a very solid bet. Added to this is AA-LCR, where it leads in long-document analysis.

The fine print hides beyond the price

At this point, one question weighs as much as any benchmark: how much it costs to use these models seriously. In standard prices per 1M tokens, the contrast is clear. On Gemini 3 Pro, input moves between 2 and 4 dollars depending on the input-token tranche, while on Qwen3-Max input is listed at $1.2. But the most important difference appears at output, which is where the model’s “thinking” is paid for: Gemini charges 12 to 18 dollars against Qwen’s 6 dollars. Translated into proportions, in standard use Gemini is approximately 1.67 times more expensive on input and 2 times more expensive on output. If the tranche exceeds 200,000 input tokens, the distance grows to 3.33 times on input and 3 times on output.

And here we come to the part usually left out of the conversation when everything focuses on power and price: what happens to your data when you use the model, and under what rules. In the case of Qwen, two worlds must be clearly separated.
On the one hand, there is the consumer web chat, whose terms contemplate the use and storage …
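As a footnote to the price comparison above, the quoted ratios can be reproduced from the listed per-1M-token prices (assuming, as the article does, a flat price within each tranche):

```python
# Per-1M-token prices quoted in the article (USD).
GEMINI_3_PRO = {"input": (2.0, 4.0), "output": (12.0, 18.0)}  # (standard, >200k-input tranche)
QWEN3_MAX = {"input": 1.2, "output": 6.0}

def price_ratio(kind: str, long_context: bool = False) -> float:
    """How many times more expensive Gemini 3 Pro is than Qwen3-Max."""
    gemini = GEMINI_3_PRO[kind][1 if long_context else 0]
    return gemini / QWEN3_MAX[kind]

# Standard use:           input ~1.67x, output 2x
# Over 200k input tokens: input ~3.33x, output 3x
```

The asymmetry is the interesting part: for a “thinking” model that emits long reasoning traces, output tokens dominate the bill, so the 2x to 3x output gap weighs more than the input gap.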

Google’s secret weapon against CUDA’s dominance is called TorchTPU. And it’s a missile aimed below NVIDIA’s waterline

Google has launched an internal initiative called “TorchTPU” with a singular goal: making its TPUs fully compatible with PyTorch. For the uninitiated, we translate: what Google intends is to break, once and for all, the monopoly and absolute control that NVIDIA holds through CUDA.

Why it matters. NVIDIA became the most valuable company in the world by market capitalization for two big reasons. The first: its AI GPUs. The second, much more important: CUDA, the software platform used by virtually all AI developers, which has an important peculiarity: it only works on NVIDIA’s own chips. So if you want to work in AI with the latest and greatest, you have to jump through NVIDIA’s hoops… until now.

What happens with Google and its TPUs. Google’s Tensor Processing Units (TPUs) were until now optimized for JAX, Google’s own platform, similar to CUDA in its objective. However, the majority of the industry uses PyTorch, which has been optimized for years around CUDA. That creates a barrier to entry for other chipmakers, which face a huge bottleneck in attracting customers.

Meta is in the mix. Anonymous sources close to the project tell Reuters that, to achieve its goal and accelerate the process, Google has partnered with Meta. This is especially striking because it was Meta that originally created PyTorch. Mark Zuckerberg’s company has ended up just as dependent on NVIDIA as its rivals, and is very interested in Google’s TPUs offering a viable alternative to reduce its own infrastructure costs.

Google as a potential AI chip giant. The company led by Sundar Pichai has made an important change of direction with its TPUs, which were previously reserved exclusively for its own use. Since 2022, the Google Cloud division has taken control of their sale and turned them into a fundamental revenue driver, because they are no longer used only by Google: just ask Anthropic.
A spokesperson for that division declined to comment specifically on the project, but confirmed to Reuters that this type of initiative would give customers the ability to choose.

All against NVIDIA. This alliance is the latest attempt to neutralize that great ace up NVIDIA’s sleeve. In recent months we have seen companies like Huawei preparing their own alternative ecosystem to CUDA, and also taking part in a joint effort by several Chinese AI companies with the same purpose.

Hardware matters; software matters more. CUDA has become such a critical component for NVIDIA that if other semiconductor manufacturers have not been able to compete, it is not because of their chips, but because they cannot support CUDA natively. A great example is AMD, which has exceptional AI GPUs. In fact, they are superior to NVIDIA’s in certain respects, but its software is not as mature.

In Xataka | Google’s TPUs are the first big sign that NVIDIA’s empire is faltering
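The compatibility gap TorchTPU is attacking can be felt in how PyTorch code picks a device today: CUDA is a first-class citizen, while TPUs normally go through the separate torch_xla package. A minimal sketch of that dispatch, under those assumptions (the fallbacks let it run even on a machine without PyTorch installed):

```python
def select_backend() -> str:
    """Pick the best available PyTorch device string.

    Illustration of the dispatch problem TorchTPU targets: today the TPU
    path usually goes through the separate torch_xla package rather than
    being a native PyTorch backend the way CUDA is.
    """
    try:
        import torch_xla.core.xla_model as xm  # TPU path (extra package)
        return str(xm.xla_device())
    except ImportError:
        pass
    try:
        import torch
        if torch.cuda.is_available():          # NVIDIA path, backed by CUDA
            return "cuda"
    except ImportError:
        pass
    return "cpu"                               # universal fallback
```

If TorchTPU lands as described, the aspiration would be for a TPU device string to become as unremarkable in this kind of code as `"cuda"` is today, with no extra package in the way.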

stand up to Google’s Nano Banana Pro phenomenon

You may remember when, a few months ago, half the internet started creating Studio Ghibli-style images with ChatGPT to upload to social networks. The “magic” behind that fever was OpenAI’s new image generation model. But everything moves so fast that the conversation barely lasted. Some time later, our attention was already elsewhere: on how hard it was to tell apart some of the images created with Nano Banana Pro. In a very short time, half the world began talking about the virtues of Google’s new generative model, and many placed it ahead of OpenAI’s.

But this is an open race, with technology giants fighting for AI leadership. And, as expected, the company led by Sam Altman has responded. This Tuesday it launched ChatGPT Images, which arrives boasting several improvements for users.

Editing as a key element. One of the great historical challenges of image generation tools has been editing specific elements. ChatGPT Images aims directly at this limitation, allowing us to modify only what interests us, from a particular object to the lighting, the composition or even the appearance of people. This opens the door to combining elements or introducing very specific changes without having to remake the entire image, something that until now used to be a weak point of this type of model.

See yourself in an ad or “travel” to your favorite place. Another area where ChatGPT Images makes a leap is creative transformations. Simply upload your own photo and accompany it with a simple prompt to obtain, in a matter of seconds, surprisingly convincing results. It is worth clarifying that this idea is not completely new. In fact, it is one of the most outstanding virtues of Nano Banana Pro, a model that our colleague Javier Lacort was able to test thoroughly and which was already pointing in this direction. Let’s look at some examples with ChatGPT Images.
Original image: “Create an image of this man, but in Times Square, New York, with clothes, looks, surroundings, etc., that are believable for winter 2025”

“Place this person, full body, in a Japanese city during a rainy night, with neon, reflections on the ground and a cyberpunk aesthetic”

Precision as a flag, and improvements in text. OpenAI also places emphasis on improved accuracy. How many of us have asked for something specific and received just the opposite, or found that the model did not understand the instruction correctly? Part of that problem, according to the company, should be left behind: if we provide detailed instructions, the system should honor them more faithfully. In addition, text generation within images is reinforced, a key aspect for creating posters, promotional ads and other content where typography and the message are as important as the image itself.

Images | OpenAI. In Xataka | We believed that Microsoft had already put Copilot everywhere. LG shows us we were very wrong

Gemini 3 has left all its competitors behind. It’s Google slamming its fist on the table: Crossover 1×32

Three years ago, panic struck Google. The launch of ChatGPT made Google declare a “code red” in the face of an AI model that promised a clear revolution, and a clear threat to the search business. Sundar Pichai began to make moves, but the truth is that the first steps with Bard were disastrous. More problems and blunders followed, but since then Google’s trajectory has been spectacular, and its AI models have not stopped racking up successes.

We saw it with Gemini 2.5 Pro and with Nano Banana, but now they have proven it again with Gemini 3, which has managed to become the best-performing model in most areas, at least according to the benchmarks the company has published. That is somewhat surprising, especially considering that OpenAI seemed to have the market under control with a ChatGPT that remains more popular, yet is little by little being cornered by the competition.

In fact, Google seems to be doing everything right lately in this area. DeepMind is the great reference for “serious AI”, and Google’s enormous resources (its own cloud, its own chips and its own models) point to a bright future for the company. We talk about all of this in this episode, Crossover 1×32, in which we review Google’s hesitant beginnings and how the company shed its fears to bet everything on AI. That in itself is surprising, because that bet is also risky for them. Exciting times!

On YouTube | Crossover

a maneuver that aims to close the gap with Google’s Gemini 3

In the race to lead the development of artificial intelligence, the pace has become a succession of linked moves. GPT-5.1 arrived on November 12, an update aimed at polishing the experience and keeping users satisfied. Just a few days later, on November 18, Google responded with Gemini 3, an evolution of its flagship model that left a very good impression among those who began trying it. Following that launch, rumors began to circulate that the startup led by Sam Altman had activated a supposed “code red” upon seeing its direct rival pull ahead. This seems to be the first result of that internal shake-up: less than a month after the previous update of its flagship model, GPT-5.2 is here. The promise is to solve some known problems, reduce latency and gain reasoning ability. An evolution within the 5 series. GPT-5.2 appears as a version designed to boost knowledge work, with advances in coding, vision, document analysis and multi-step projects. OpenAI positions it as the direct evolution of GPT-5.1, not as a generational leap. According to the company, the update improves the handling of long contexts, reduces errors and increases the ability to coordinate tools. More differentiated tiers of use. In GPT-5.2, the three usual variants are somewhat more differentiated in their use, not because of new functions, but because of the way they integrate the improvements announced by OpenAI. Thinking absorbs much of the progress in reasoning, handling large documents and coordinating tools. Pro raises the bar in specialized tasks, especially in code and technical calculations. Instant, for its part, benefits from more stable explanations and fewer errors. The result is a clearer separation between everyday tasks, complex jobs and expert needs. A visible improvement in multiple evaluations.
OpenAI presents figures that show GPT-5.2 ahead of GPT-5.1 in very different areas, from scientific reasoning to programming and knowledge tasks. In GDPval, the evaluation that measures well-specified jobs across 44 occupations, the model achieves 70.9% wins or draws against human professionals. In GPQA Diamond it rises to 92.4%, and in AIME 2025 it achieves 100%. The trend repeats in technical tests such as FrontierMath or ARC-AGI, where performance also increases compared to the previous version. The improvements also show up when moving from figures to day-to-day tasks. In internal evaluations of financial analysts’ real work, such as three-statement modeling or leveraged-buyout simulations, Thinking raises its average score from 59.1% to 68.4%. The company also promises advances in generating spreadsheets and presentations with a clearer structure. In addition, companies such as Notion, Box, Shopify and Harvey have, according to OpenAI, observed improvements in long-range reasoning and in tool use in their own workflows. If these results hold up in real environments, they could reduce manual work in processes that require precision and consistency. A more stable environment for developers. GPT-5.2 Thinking, they say, achieves higher performance in demanding software tests, especially those that evaluate the ability to apply complete and consistent changes to real projects. The company indicates that the model better coordinates sequences of steps, something reflected in internal evaluations and in feedback from platforms such as Windsurf or Charlie Labs. Fewer errors in sight. OpenAI claims that GPT-5.2 Thinking reduces the frequency of responses containing errors by around 30% relative to GPT-5.1, an improvement it attributes to more stable reasoning and a greater ability to detect mistakes before generating the final response.
The company also points to advances in handling sensitive situations, such as conversations linked to emotional distress or mental health. While it acknowledges that the model is still imperfect, it maintains that these adjustments contribute to a more reliable experience in everyday use. Where you can use GPT-5.2 today. OpenAI indicates that GPT-5.2 will begin rolling out in ChatGPT for paid plans, including Plus, Pro, Go, Business and Enterprise. In the API, GPT-5.2 Thinking is available as gpt-5.2 and the Instant version appears as gpt-5.2-chat-latest. The company has also promised to keep GPT-5.1 available in ChatGPT for three months before removing it from paid plans. In terms of pricing, GPT-5.2 stands at $1.75 per million input tokens and $14 per million output tokens, more expensive than GPT-5.1, although OpenAI maintains that its greater efficiency reduces the final cost in demanding tasks. Images | OpenAI In Xataka | OpenAI knows that it needs to keep generating memes and virals. That’s why it is willing to pay Disney a lot of money for its content.
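To put those prices in perspective, here is a minimal sketch of per-request cost at the quoted rates ($1.75 per million input tokens, $14 per million output tokens). The rates come from the text above; the token counts in the example are purely hypothetical:

```python
# Hedged sketch: estimate the cost of one API request at the rates
# quoted in the article. The example token counts are made up.
INPUT_RATE = 1.75 / 1_000_000   # dollars per input token
OUTPUT_RATE = 14.0 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in dollars for a single request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a 20,000-token prompt producing a 2,000-token answer.
cost = request_cost(20_000, 2_000)
print(f"${cost:.4f}")  # → $0.0630
```

Note how the higher output rate means a short answer to a long prompt can still cost almost as much as the prompt itself.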

Google’s TPUs are the first big sign that NVIDIA’s empire is faltering

It was 2013, and Jeff Dean, one of Google’s directors, realized something along with his team: if every Android user used the new voice search option for three minutes a day, the company would have to double its number of data centers to cope with the computational load. At the time Google was using standard CPUs and GPUs for the task; alarmed, they realized they needed to build their own chips. This is how Google’s first Tensor Processing Unit (TPU) was born, an ASIC specifically designed to run the neural networks powering its voice services. The project grew and grew, and by 2015, before the world knew about them, those first TPUs were already accelerating Google Maps, Google Photos and Google Translate. A decade later, Google has created TPUs so powerful that, almost unintentionally, they have become a surprising and unexpected threat to the almighty NVIDIA. No small feat. Blessed panic. Google’s TPUs keep their promise. Until now, when an AI company wanted to train its models, it turned to advanced NVIDIA chips. That has changed recently, and in fact we have seen two signs that mark a genuine turning point. (Missing from that timeline is the latest and most striking member of the family, Ironwood, presented in April 2025. Source: Google.) The first is the release of Claude Opus 4.5, an exceptional model, especially in programming tasks. Anthropic has explained that this new model does not depend on NVIDIA alone, but combines the power of three different platforms: NVIDIA’s chips, Amazon’s Trainium and Google’s TPUs. On top of that, Google has caused a stir because its brand-new AI model, Gemini 3, was trained exclusively on the new Ironwood TPUs, which were presented in April and have become a real sensation.
As we said, Google started the project in 2013 and launched its first TPU in 2015, but that internal need became a blessing: what Google could not have known is that those TPUs would arrive at exactly the right time. The launch of ChatGPT turned them into a fantastic opportunity to strengthen its AI infrastructure, but also into hardware for the training and inference of its own AI models. From there we reach the current Ironwood TPUs, now in their seventh generation and exceptional both in inference and in training (as their use for Gemini 3 has demonstrated). Google has managed to squeeze even more out of its chips and has doubled peak FLOPS per watt compared to the previous generation. Source: Google. The efficiency and power of these chips represent a very notable jump over their predecessors: for example, they achieve double the FLOPS-per-watt performance of the Trillium chips. Compared with the TPU v5p from 2023, the new chips reach 4,614 TFLOPS, ten times the 459 TFLOPS of those models from two years ago. It is an extraordinary leap in performance (and efficiency). The key to 2025: Google now lets others use its TPUs. But in the evolution of the TPUs there is another differentiating element in 2025: this has been the year in which Google stopped “being selfish” with its TPUs. Previously only Google could use them, but in recent months it has reached agreements with OpenAI (which is also looking to make its own chips) and, above all, with Anthropic. Ironwood’s performance is already comparable to that of NVIDIA’s GB200 and even the GB300. Source: SemiAnalysis. That second alliance is especially monumental as part of this outsourcing strategy. Google is not only renting capacity in its cloud, but also facilitating the physical sale of hardware. The agreement covers one million TPUs: 400,000 units of its TPUv7 Ironwood sold directly through Broadcom, and 600,000 rented through Google Cloud (GCP).
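The generation-over-generation figures above can be sanity-checked with simple arithmetic. This sketch uses only the TFLOPS numbers quoted in the text; treating them as directly comparable peak figures is our assumption:

```python
# Sketch: check the TPU performance claims quoted in the article.
TPU_V5P_TFLOPS = 459       # 2023-generation figure quoted above
IRONWOOD_TFLOPS = 4_614    # TPUv7 Ironwood figure quoted above

ratio = IRONWOOD_TFLOPS / TPU_V5P_TFLOPS
print(f"Ironwood delivers {ratio:.1f}x the peak TFLOPS of TPU v5p")
# 4,614 / 459 ≈ 10.1, consistent with the "ten times" claim
```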
A deep-dive report from SemiAnalysis shows that, from a technical perspective, the TPUv7 Ironwood is a formidable competitor. The performance gap with NVIDIA is closing, and Google’s TPU is practically on par with NVIDIA’s Blackwell chip in FLOPS and memory bandwidth. The real advantage, however, lies in cost: the Total Cost of Ownership (TCO) of an Ironwood server is estimated to be 44% lower for Google than that of an NVIDIA GB200 server, allowing the search giant to offer very competitive prices to clients like Anthropic. To help even more in that race, SemiAnalysis points out, Google has another ace up its sleeve: its Inter-Chip Interconnect (ICI), a network architecture that allows up to 9,216 Ironwood chips to be connected in a 3D torus topology. Google also uses optical circuit switches that route data optically without electrical conversion, reducing both latency and power consumption. This lets Google reconfigure the network topology on the fly to avoid (or mitigate) failures and to optimize for different types of parallelism. NVIDIA’s “moat” with CUDA is narrowing. We have often repeated that although other semiconductor manufacturers already have flashy chips (look at AMD), NVIDIA’s true strength lies in CUDA, the software platform that has become the de facto standard for AI developers and researchers. Google wants to change things here too. In recent years the company focused on software such as JAX and the XLA compiler, but lately it has started prioritizing native PyTorch support (a great competitor of TensorFlow) on its TPUs. That is crucial to making it easier for engineers and developers to migrate to its TPUs instead of NVIDIA GPUs. It was possible to use PyTorch on TPUs before, but it was cumbersome, as if one had to speak a language through a dictionary in real time, while for NVIDIA GPUs PyTorch was the “native” language.
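The article does not give the exact dimensions of the ICI torus, but the idea is easy to illustrate. In the sketch below the 16×24×24 grid is a purely hypothetical arrangement, chosen only because it multiplies out to the 9,216 chips mentioned; the point is that in a 3D torus every edge wraps around, so every chip has exactly six neighbors:

```python
# Hedged sketch of a 3D torus topology like the one described for ICI.
# The 16 x 24 x 24 dimensions are a made-up example (16*24*24 = 9,216);
# the real Ironwood pod layout is not specified in the article.
DIMS = (16, 24, 24)

def neighbors(x: int, y: int, z: int) -> list[tuple[int, int, int]]:
    """Return the six wrap-around neighbors of a chip in the torus."""
    dx, dy, dz = DIMS
    return [
        ((x + 1) % dx, y, z), ((x - 1) % dx, y, z),
        (x, (y + 1) % dy, z), (x, (y - 1) % dy, z),
        (x, y, (z + 1) % dz), (x, y, (z - 1) % dz),
    ]

total_chips = DIMS[0] * DIMS[1] * DIMS[2]
print(total_chips)              # → 9216
print(len(neighbors(0, 0, 0)))  # → 6 (edges wrap, so there are no corners)
```

The wrap-around links are what keep worst-case hop counts low, and the optical circuit switches mentioned above are what let such a topology be rewired around failed chips without recabling.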
With XLA, Google used an intermediate library as a translator to run PyTorch, and that was a nightmare for developers. Native support allows Google’s TPUs to behave just like NVIDIA GPUs in the …
