Chinese startups have been relying on NVIDIA chips to train their models for years. That is already changing

The name of the Chinese startup Zhipu AI (Z.ai) may not ring a bell, but perhaps GLM, its AI model, does: its latest version, GLM-4.7, already competes with Claude Sonnet 4.5 and GPT-5.1. The real surprise from this "Chinese AI tiger" is the launch of GLM-Image, and not so much for what it does as for how it was built.

What has happened. GLM-Image is a multimodal generative AI model focused on image generation. The idea, of course, is to compete with options like Google's Nano Banana. That is interesting, but even more striking is that the model was not trained on conventional chips.

Trained on Chinese chips. According to Z.ai, this is the first model developed in China that has been trained entirely on "local" chips. Specifically, it was trained on Huawei's Ascend chips, using Huawei Ascend Atlas 800T A2 servers and a framework called MindSpore. The traditional NVIDIA AI chips that are usually the default choice for AI model developers at Chinese startups were not used at all.

Turning point? This milestone demonstrates the real feasibility of training high-performance generative AI models on a platform developed entirely in China. This is no small thing: it is validation that innovation in this area can continue despite the restrictions imposed by the US. In fact, Zhipu AI, which was added to the US blacklist last year, has intensified its collaboration with other local manufacturers, such as the promising firm Cambricon, which has risen from the ashes thanks to the tariff war.

Threat to NVIDIA. The news comes at a singular moment, because NVIDIA has not stopped pressuring the US government to once again allow it to sell its advanced AI chips to Chinese companies. It has obtained that permission (which will not come free), but now it may be China that is not interested: Beijing has said nothing at all.
That chips from companies like Huawei are a valid alternative for training quality AI models could change many things in this field.

Zhipu is soaring. The Chinese startup has also just gone public, and since its debut its shares have shot up more than 80%. Investors no longer see the company merely as a rival to Google or OpenAI, but as a banner: one that shows it is possible to compete without depending on the US and its companies.

Huawei, the big beneficiary. If the trend continues, Huawei could become China's NVIDIA, and the company is preparing to ramp up production of its AI chips. It is not alone: Cambricon plans to triple its production by 2026, which makes it clear that China's industrial machinery is moving quickly to neutralize the impact of US vetoes.

Challenges... Despite everything, Zhipu has already warned that the price war in the AI sector will go international. If Chinese companies end up controlling the entire chain (or rather, their own chain), they could offer AI services at much lower costs than their Western competitors, who must pay NVIDIA's margins and Big Tech's cloud infrastructure.

...and unknowns. This technical achievement raises other questions. One of the most important is how powerful Huawei's chips are compared to NVIDIA's in these workloads: is training much slower? Is it more expensive in time and resources? The efficiency of the MindSpore framework compared to PyTorch or TensorFlow is another key piece of these developments.

In Xataka | Faced with the US strategy, China has a plan to revive its technology industry: that AI belongs to everyone

Qwen and open models

Alibaba's Qwen family of open AI models is quietly taking the world by storm. As of this January, these models have surpassed 700 million downloads on the Hugging Face platform. The milestone is significant and confirms the supremacy of Chinese companies in this type of model.

What has happened. The developers of this family of models point out that the Hugging Face data doesn't lie: these projects have become the most popular open models worldwide, at least by number of downloads.

Unstoppable. In October 2025 the Qwen model family managed to surpass the previous leader in this segment, Meta's Llama family. Two months later, the Qwen models had been downloaded so many times that the total exceeded the combined figure of the next eight most popular AI model families: those from Meta, DeepSeek, OpenAI, Mistral, NVIDIA, Zhipu.ai, Moonshot and Minimax.

Alibaba is a steamroller. Since Qwen launched in 2023, the advance of this family has been unstoppable. Although accessing the models was initially more cumbersome, Alibaba has leveraged its infrastructure and size to popularize them little by little. Above all, its engineers have done one thing: never stop launching models. The pace has been frenetic, but the models are also notable, comparable to proprietary models from the major US technology companies.

Qwen wants to be the Android of AI. The family's catalog is enormous. Hugging Face currently lists 300 different models, covering both slightly older versions and multiple variants of each new major release. For example, there are models specialized in visual recognition (Qwen VL), in programming (Qwen Coder), in image generation (Qwen Image Edit), absolutely gigantic "generic" models, and others like Omni that already compete with Grok 4 or GPT-5 Pro. The intention is obvious: to become the "standard" AI models on the market.
Even if only by sheer volume. Surprise: the most popular model is not the most powerful. A study published in October provided surprising data on the growth in popularity of this family. One would expect the most downloaded model to be one of the Qwen 3 versions, the most modern and capable. In reality, the most downloaded is Qwen2.5-1.5B-Instruct, a "light" model that can run even on modest mobile phones and laptops.

Small but mighty. The Hugging Face list currently indicates that the download leader is Qwen2.5-3B-Instruct, more modern and somewhat less lightweight, but still "small" by today's standards. There is clearly notable interest in running these models on phones, tablets and computers with little video memory.

Thousands of derivative projects. The ease of obtaining and using these open models has led many developers and companies to customize them and adapt them to their own needs. According to Xinhua, the family has been used in more than 180,000 derivative versions.

Llama fades. Meanwhile, Silicon Valley is confirming that its vision is different. Meta, which initially led this space thanks to Llama, has changed course. The company is still expected to launch new versions of that model, but in the meantime most technology giants are focusing on their closed, proprietary models. That said, some open models with a promising future remain, as we have seen with Gemma (Google), Phi (Microsoft) or gpt-oss (OpenAI). And let's not forget that Mistral is a great European benchmark that also offers open variants.

In Xataka | China and the United States have started an antagonistic race in AI over a simple question: whether to be open source or not
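The appeal of these small models comes down to memory arithmetic. A rough back-of-the-envelope sketch (the byte-per-parameter figures are standard rules of thumb, not numbers from the article, and activation/KV-cache overhead is ignored for simplicity):

```python
# Rough memory-footprint estimate for a model's weights at different
# precisions: fp16 = 2 bytes/param, int8 = 1 byte, int4 = 0.5 bytes.
# Activations and KV cache would add some overhead on top of this.

BYTES_PER_PARAM = {"fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(params_billions: float, precision: str) -> float:
    """Approximate size of the model weights in gigabytes."""
    return params_billions * 1e9 * BYTES_PER_PARAM[precision] / 1e9

# Qwen2.5-1.5B, the surprise download leader, at each precision:
for p in ("fp16", "int8", "int4"):
    print(f"1.5B @ {p}: ~{weights_gb(1.5, p):.2f} GB")
```

At half precision a 1.5B-parameter model needs roughly 3 GB just for weights, and a quantized copy well under 1 GB, which is why it fits comfortably on a mid-range phone or an old laptop.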

The new Siri will be based on Gemini AI models

In the midst of the AI boom, with increasingly sophisticated voice assistants like those of ChatGPT or Perplexity, Siri is showing its age all too clearly. It doesn't always understand what we ask of it and often stumbles as soon as we stray from a few predefined patterns. Between promises that have fallen by the wayside, internal tensions and leadership changes, Apple seemed to be losing its footing in one of the most decisive technological races of the decade. And, although it is still too early to know whether it can reverse this dynamic, the company has just made a major move: allying with one of its great rivals.

Agreement with Google. The Cupertino company has signed a multi-year collaboration agreement with the search giant under which the next generation of the so-called Apple Foundation Models will be based on Gemini models and on Google's cloud technology. That foundation will power the next Apple Intelligence features, including a more personalized Siri whose arrival is slated for "this year."

With privacy at the center. The statement adds that, despite this change, the system will continue to run on devices and on Apple's Private Cloud Compute platform, following its privacy standards. Apple insists that the operational heart of Apple Intelligence is not leaving home.

The starting point is WWDC 2024. There Apple presented Apple Intelligence as its great answer to the rise of generative AI and placed Siri at the center of that strategy, promising a much deeper understanding of personal context, the ability to "see" what appears on screen, and the ability to chain actions between applications. In practice, this meant the assistant had to be able to interpret emails, messages, appointments or files and act on them without the user having to jump from one app to another. It was a leap in ambition far greater than that of traditional Siri.

From promises to reality.
At the end of 2024, Apple publicly kept up the pace. In a December press release, it reiterated that Siri's most advanced capabilities would arrive "in the coming months," while launching other Apple Intelligence pieces such as Image Playground and Genmoji. In that same context, Apple again spoke of awareness of personal context, vision of what is on screen, and "hundreds of new actions" within and between its own and third-party apps.

Three months later, in March 2025, the tone changed. In an official statement to Daring Fireball, the company admitted that some of those features would require more time than expected and began to talk about a "more personalized" Siri that would arrive "over the next year." June 2025 came and, at that year's WWDC, Siri showed no leap equivalent to the one hinted at twelve months earlier.

This lack of news ended up pushing Apple to explain itself in public. Craig Federighi, chief software officer, and Greg Joswiak, head of marketing, addressed the issue in interviews after the event. Federighi explained that Apple had had a "version 1" of the new Siri ready to ship between December 2024 and spring 2025, but that they decided to stop it after concluding that it would not meet customer expectations or the company's internal standards in that timeframe.

In the end, everything comes back to the same point. The company now places a more personalized version on its immediate roadmap, after months of back-and-forth with the calendar. The announced alliance changes the technical foundation for getting there, but it does not remove the acid test. It will be actual use, when users start asking complex things of their iPhone or Mac, that determines whether Apple has managed to catch up in a race that never lets up.

Images | Apple | Google

In Xataka | Google has found a way to monetize its AI: adding advertising while you shop without leaving it

Their models not only perform just as well but are much cheaper

Sometimes it is hard to keep up with the pace of innovation in artificial intelligence. But that is also a good sign: competition is voracious, and even if we thought companies like OpenAI or Google had every chance of completely dominating this sector, the game is far from over. What began as a race to create the most powerful models has become a battle to deliver the best performance at the lowest possible cost. And in this new competition, China leads.

A change of tack. For years, the AI conversation focused solely on which model was more capable: who passed more benchmarks, who solved more complex problems, who generated better answers. That phase is giving way to another in which price is once again a determining factor in decision-making. This transition marks a turning point, since it is mainly Chinese startups that are demonstrating a remarkable ability to produce powerful and extraordinarily economical models.

Qwen leads the revolution. As Kai Williams highlights in the 'Understanding AI' newsletter, Alibaba's open model ecosystem, known as Qwen, has become the most downloaded model family in the world, according to Hugging Face data analyzed by the ATOM Project. "Qwen alone is roughly matching the entire US ecosystem of open models today," said Nathan Lambert, researcher at the Allen Institute for Artificial Intelligence, at the PyTorch conference. The Chinese company has achieved something that seemed difficult: creating competitive models at practically every size, from small models up to 235 billion parameters, offering options for any business need.

Real business adoption. Beyond the technical figures, the use cases also deserve special mention. In October, Brian Chesky, CEO of Airbnb, caused a bit of a stir by stating that his company "relies a lot on Alibaba's Qwen model" because it is fast, cheap and powerful enough.
This statement is interesting because of its context: a top American company preferring an open Chinese model, something powerful enough to change the industry's perception. Williams points out that, beyond Airbnb, other companies would also prefer to adopt Qwen's models but cannot, for reasons of image or regulatory compliance, mainly because the models come from China. That could be the great barrier making adoption difficult for Qwen and the rest of the Chinese models, so Chinese startups have a big job ahead of them to change that perception in an increasingly complicated geopolitical context.

Kimi K2 sprang the surprise. If Qwen dominates through volume and versatility, Kimi K2 Thinking stands out as possibly the best open model in the world in terms of benchmark scores. As Williams notes in the newsletter, Artificial Analysis currently ranks it as the most powerful model not created by OpenAI, Google or Anthropic.

DeepSeek and the domino effect. The launch of DeepSeek R1 in January was the catalyst that unleashed this wave. It arrived just four months after OpenAI announced its first reasoning model, o1, but with a crucial difference: DeepSeek openly published the model's parameters. The noise was such that the DeepSeek app briefly overtook ChatGPT as the most downloaded app in the iOS App Store, NVIDIA shares fell almost 20% days later, and Chinese companies rushed to integrate the model into their products. We are still waiting for DeepSeek's next deep-reasoning model, about which little is known yet.

On the other side, putting out fires. The United States has not sat idly by.
On the open-weights front, OpenAI launched its models in August, IBM published its Granite 4 models in October, and Google, Microsoft, NVIDIA and the Allen Institute for AI have also introduced new 'semi-open' models this year. But none have reached the level of the main Chinese open models. Lambert, who has led efforts to advance a new generation of American open models, acknowledges that progress has been slow and the gap is widening. Everything indicates that 2026 will be decisive in determining the pace of enterprise adoption and, above all, the choice of model in an increasingly immense ocean.

Cover image | Xataka with Mockuuups Studio and Kimi AI

In Xataka | NVIDIA begins to make moves with China: it is considering increasing production of the H200 in the face of an avalanche of orders, according to Reuters

V16 beacons with associated application. Five models to comply with DGT regulations as of January 1

There is hardly any time left before we have to carry a mandatory V16 beacon in the car. Which one to buy? There are many models, so in this article we are going to focus on five V16 beacons that come with their own smartphone application:

Help Flash IoT, an affordable beacon compatible with myIncidence.
Help Flash IoT+, a beacon with a good number of candelas that is also compatible with myIncidence.
LEDOne, a V16 beacon that stands out for its format and is compatible with its own app.
LEDOne for trucks, which includes the beacon and an arrow sign for commercial vehicles.
FlashLED, which comes with an anti-shock case and is compatible with its own app.

Help Flash IoT. The first V16 beacon on this list is from Netun Solutions: the Help Flash IoT. It has a similar design to the Help Flash IoT+, but different characteristics: it offers visibility up to 1 km, its battery lasts approximately two hours, and it delivers more than 40 effective candelas. It also connects to the myIncidence app so you can quickly contact your insurance company and emergency services. The price may vary. We earn commission from these links.

Help Flash IoT+. Second is the more complete version of the previous V16 beacon: the Help Flash IoT+. It has a more interesting spec sheet and its price is usually similar: it offers approximately 290 candelas, its battery lasts up to 2.5 hours, and it also connects to the myIncidence app. The price may vary. We earn commission from these links.

LEDOne. The LEDOne is a particularly interesting beacon because of its format, since it incorporates a stand that lets it be placed a little higher, improving visibility. It offers 120 effective candelas, its battery lasts approximately two hours, the brand says it is suitable for all vehicles, and it can be connected to the LEDOne app to notify insurance and emergency services. The price may vary.
We earn commission from these links.

LEDOne for trucks. An alternative to the previous V16 beacon, or rather a more complete option, is available at Leroy Merlin. The LEDOne comes in a truck pack that includes both the beacon mentioned above and an arrow sign for industrial vehicles, further improving visibility. Since it is the same beacon, it is compatible with the LEDOne app. The price may vary. We earn commission from these links.

FlashLED. Finally, PcComponentes sells the FlashLED V16 beacon, which in this case comes with an anti-shock hard case. It runs on a single battery and is compatible with its own app, SOS Alert. However, the brand does not state either the theoretical battery life or the candela figure. The price may vary. We earn commission from these links.

Some of the links in this article are affiliate links and may provide a benefit to Xataka. In case of unavailability, offers may vary.

Image | Netun Solutions, LEDOne, FlashLED

In Xataka | Safety, organization and entertainment gadgets and accessories for cars on long trips

In Xataka | Clarifying all the mess that the DGT has on its hands: the V-16 light, the V-27 signal and the emergency triangles

The elite of open models spoke Chinese. Mistral has just placed Europe at a level not even the US managed to reach

Over the last year, the elite of open models for assisted programming, at least in benchmarks such as SWE-Bench Verified, has spoken with a Chinese accent. Names like DeepSeek, Kimi or Qwen had settled into the top positions and were setting the pace in complex software engineering tasks, while Europe was still searching for its place. The arrival of Devstral 2 alters that distribution. It does not displace those already at the top, but it puts Mistral at the same level and turns a European company into a real contender in a field that until now seemed reserved for others.

A change of league: the technical leap that had been brewing for some time. In recent months, the open models developed in Europe and the United States had shown constant evolution, though still without the performance needed to compete in the most demanding tests. Progress was evident, but a project capable of consolidating it at a higher level, and proving that this path could yield results comparable to the sector's best, was missing.

Devstral 2 in numbers: performance, size and licenses. The new Mistral model reaches 123B parameters in a dense architecture and offers an expanded context of 256K tokens, accompanied by a modified MIT license that eases its adoption in open environments. Its compact sibling, Devstral Small 2, shrinks the model to 24B parameters under an Apache 2.0 license. In the SWE-Bench Verified figures published by the company, Devstral 2 scores 72.2%, a mark that places it in the most competitive tier of the open models evaluated and confirms its presence among the most advanced alternatives in the segment.

The figure lands in a landscape concentrated at the top of the benchmark. Among the open models, DeepSeek V3.2 leads the group with 73.1%, followed by Kimi K2 Thinking with 71.3% and by entries such as Qwen 3 Coder Plus and Minimax M2, which hover around 69 points.
At lower levels appear GLM 4.6, GPT-OSS-120B, CWM and DeepSWE, with more modest results. Among closed commercial (proprietary) models, the chart shows higher scores: Gemini 3 Pro reaches 76.2%, GPT-5.1 Codex Max rises to 77.9% and Claude Sonnet 4.5 scores 77.2%, all above the best marks registered by open models.

What SWE-Bench Verified really measures and why it matters. SWE-Bench Verified is a test designed to evaluate whether a model can solve real programming tasks, not synthetic exercises. Each case presents a bug in an open-source repository and requires a patch that makes the previously failing tests pass. The evaluation measures whether the system understands the project's structure, identifies the cause of the problem and proposes a coherent solution. It is a useful and demanding metric, although limited to Python repositories and a specific set of situations that do not cover the full breadth of software work.

From copilots to agents that act on the project. The arrival of Devstral 2 coincides with a broader change in how we work with programming tools. It is no longer just about receiving suggestions in the editor, but about having agents capable of exploring an entire repository, interpreting its structure and proposing changes consistent with its real state. In this context comes Vibe CLI, a tool that lets Devstral analyze files, modify parts of the code and execute actions directly from the terminal, bringing these capabilities closer to developers' daily workflow.

Cost and deployment: what each type of user can do with Devstral. The model will be available for free for an initial period and will then cost $0.40 per million input tokens and $2.00 per million output tokens, while the Small 2 version will be priced lower.
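To put those per-token prices in perspective, here is a minimal sketch of what a single agent request might cost at the rates quoted in the article ($0.40 per million input tokens, $2.00 per million output tokens); the request sizes in the example are made up for illustration:

```python
# Cost estimate from the per-million-token prices quoted for Devstral 2.
PRICE_IN_PER_M = 0.40   # USD per million input tokens
PRICE_OUT_PER_M = 2.00  # USD per million output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the quoted rates."""
    return (input_tokens * PRICE_IN_PER_M
            + output_tokens * PRICE_OUT_PER_M) / 1_000_000

# Hypothetical coding-agent call: a large repository context goes in,
# a comparatively small patch comes out.
cost = request_cost(input_tokens=200_000, output_tokens=8_000)
print(f"~${cost:.4f} per request")
```

The asymmetric pricing rewards exactly the agentic pattern described above: huge read-heavy contexts are cheap, and the expensive part is the (much shorter) generated output.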
Deployment also makes a difference: Devstral 2 requires at least four H100-class GPUs, aimed at data centers, while Devstral Small 2 is designed to run on a single GPU and, according to Mistral's documentation, the Devstral Small family can even run in CPU-only configurations, without a dedicated GPU. This range lets both companies and individual developers find a suitable entry point.

The appearance of Devstral 2 introduces an unexpected element into a space where Chinese companies set the pace and where not even the United States, despite its leadership in artificial intelligence, had an open model in this performance range on SWE-Bench Verified. Mistral does not displace those already at the top, but it does broaden the conversation and shows that Europe can compete in a field where until now it did not appear. It is a move that does not alter the overall hierarchy, though it does open new room for the evolution of assisted programming tools.

Images | Xataka with Gemini 3

In Xataka | OpenAI and Google deny that they are going to put ads in ChatGPT and Gemini. The reality is that the numbers don't add up on subscriptions alone

Asked what sense it makes to compete with Google, OpenAI or Anthropic in AI, Mistral has an answer: small, local models

The French startup Mistral AI has launched Mistral 3, a family of 10 open-source artificial intelligence models that represents its most ambitious bet to date. The Parisian company, often considered Europe's main hope in AI development, seeks to differentiate itself from the big American technology companies by betting on flexibility and deployment across all kinds of devices rather than raw power. Below, we cover all the news.

What Mistral has presented. The Mistral 3 family includes a flagship model called Mistral Large 3, with 675 billion parameters, and nine compact models grouped under the name Ministral 3 (in three sizes: 14, 8 and 3 billion parameters). All models are released under the Apache 2.0 license, allowing unrestricted commercial use. The large model is also multimodal, able to process text and images, and multilingual, with special emphasis on European languages. The small models, meanwhile, can run on devices with just 4 GB of memory, making them well suited to modest laptops, mobile phones and embedded systems without the need for an internet connection.

Why the strategy matters. While OpenAI, Google and Anthropic focus on increasingly powerful, closed systems with agentic capabilities, Mistral has focused on the breadth and reach of its models, on efficiency, and on what its co-founder Guillaume Lample calls "distributed intelligence." As he told VentureBeat, the company believes the future of AI will be defined not by scale but by ubiquity: models small enough to run in drones, vehicles, robots and consumer devices.

The economic and practical argument. Lample explained that in more than 90% of cases, a small, specifically tuned model can get the job done, especially if it is trained with synthetic data for specific tasks. According to Lample, this is not only cheaper and faster, it eliminates concerns about privacy, latency and reliability.
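The "small model first" argument can be sketched as a routing pattern: try the cheap on-device model, and escalate to a large hosted model only when it is not confident. The two model functions below are stubs invented for illustration; in a real system they would wrap an on-device model and a remote API.

```python
# Illustrative "small model first" routing, per Lample's argument.
# Both model functions are hypothetical stubs, not real Mistral APIs.

def small_model(prompt: str) -> tuple[str, float]:
    """Stub for an on-device model: returns (answer, confidence)."""
    if "routine" in prompt:
        return "handled locally", 0.95
    return "not sure", 0.30

def large_model(prompt: str) -> str:
    """Stub for a large hosted model used only as a fallback."""
    return "handled by the large model"

def answer(prompt: str, threshold: float = 0.8) -> str:
    reply, confidence = small_model(prompt)
    if confidence >= threshold:
        return reply             # fast, private, no network round-trip
    return large_model(prompt)   # escalate only the hard cases

print(answer("a routine extraction task"))   # handled locally
print(answer("a novel multi-step problem"))  # handled by the large model
```

If roughly 90% of traffic clears the confidence threshold, as Lample claims it can, only the remaining tail pays the latency, cost and privacy price of the big model.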
The company also has teams that work directly with customers to analyze specific problems and fine-tune small models for specific tasks. That, above all, can attract companies that get frustrated choosing the best possible model for a given task and, when it doesn't perform adequately, end up giving up.

Europe lags behind. When it comes to AI innovation and technology, there is no hesitation in saying that Europe is leagues behind what companies in the United States and China are offering. That is why Mistral AI advocates a different approach, prioritizing massive deployment on devices and the flexibility of its smaller models. The capability offered by open models can be a great asset for continuing to bet on these technologies. In China, for example, the open models of DeepSeek, Alibaba or Kimi are standing out broadly, in certain tasks even surpassing competitors as large as ChatGPT. Lample noted that most leading Chinese models are text-only, with separate image-processing systems; that is another reason Mistral wants to bet on a multimodal approach.

A complete ecosystem. Mistral no longer offers only language models. The company has built an entire ecosystem that includes the Mistral Agents API, with connectors for code execution, web search and image generation; Magistral, its reasoning model; Mistral Code for programming assistance; and AI Studio, an application deployment platform that also offers analytics and logging. Its assistant Le Chat, moreover, has incorporated a deep-research mode, voice capabilities and more than 20 enterprise integrations. Thus, beyond its model lineup, the company can offer other businesses a whole layer of personalized products and services, with the aim of making that its main source of revenue.

Digital sovereignty. Although Mistral is often characterized as Europe's answer to OpenAI, the company prefers to see itself as 'a transatlantic collaboration.'
Its CEO, in fact, is based in the United States, the company has teams on both continents, and it trains these models in collaboration with American teams and infrastructure. Even so, its positioning as a defender of European digital sovereignty has earned it strategic partnerships with the French army, the country's employment agency, the Luxembourg government and various European public bodies. In October the European Commission presented a strategy to promote European AI tools that provide security and resilience while boosting the continent's industrial competitiveness.

Offline capabilities for democratization. The use cases Mistral envisions for its small models are, above all, local applications: factory robots using sensor data in real time without relying on the cloud, drones operating offline in natural disasters or rescues, and smart cars with AI assistants that work in remote areas. Lample pointed out that there are billions of people without internet access but with laptops or phones capable of running these small models, which he considers potentially revolutionary. Additionally, by running on-device, these applications preserve the privacy of user data.

The real "open source" debate. Not everyone celebrates Mistral's approach. Some critics question its decision to opt for 'open weight' models, that is, models that are free to access but disclose less than truly "open source" models, which provide the code and training data needed to train a model from scratch. Andreas Liesenfeld, assistant professor at Radboud University and co-founder of the European Open Source AI Index, told the Financial Times that data at scale is the missing key in the European AI innovation ecosystem and that Mistral contributes nothing to that.

The long-term strategic bet.
Lample acknowledged that their models are "a little behind" the most advanced closed systems, but argued that the important thing is that "they are catching up quickly." Time will tell whether Mistral's bet on low-cost, versatile models with local applications ends up positioning it as one of Europe's great AI hopes.

Cover image | Mistral AI

In Xataka | China already has an army of 5.8 million engineers. Its new plan involves accelerating doctorates

These are the 13 models that are already updated, and the other nine from Redmi and POCO that will update later

Let's go over which Xiaomi phone and tablet models HyperOS 3 has started arriving on, so that if you have one of them you know you can now start looking for the update. We already told you how to check whether your phone will be updated using an app, but now the moment of truth has arrived. We will start with the models where the rollout of this Android 16-based customization layer has already begun, and then we will tell you which phones will receive it next, so that if you have one you know Xiaomi is counting on you too.

Xiaomi phones that already receive HyperOS 3. These are all the phones and tablets from the Chinese company that can now update to HyperOS 3.0. If you go into your phone's settings and check for system updates, you can start enjoying Android 16 after updating:

Xiaomi Pad 7
Xiaomi Pad 6S Pro 12.4
Xiaomi 14 Ultra
Xiaomi 14 Ultra Titanium Special Edition
Xiaomi 14 Pro
Xiaomi 14 Pro Titanium Special Edition
Xiaomi 14
Xiaomi MIX Fold 4
Xiaomi MIX Flip
Xiaomi Civi 4 Pro
Redmi K70 Pro
Redmi K70 Ultimate Edition
Redmi K70
Redmi K70E

Normally the update arrives without you having to do anything: a notification appears alerting you that it is available, so you only have to tap it. If not, go to Settings, tap About phone, and then tap your HyperOS version to search for updates.

Phones that will also receive HyperOS 3. In addition to the models already receiving the update, there is a list of phones that will get it soon. There are no specific dates for them yet, but if you have one you should know that yes, you will get Android 16.

In Xataka Basics | Android 16: 17 functions and some tricks of the new version of Google's mobile operating system

The world's two most important weather models are arguing over whether Santander is going to freeze next week. And the cold is winning

Where has all the cold gone? So far this fall (with the sole exception of Siberia), temperatures have been relatively mild on every continent. And it seems the situation is going to stay that way: it is true that the forecasts point to a progressive drop in temperatures in southeastern Canada, the eastern United States and northern Europe, but no model paints a particularly cold scenario (except for some very long-term predictions). However, all eyes are on the polar vortex. If the models are right, it is quite possible that the vortex will experience an unprecedented disturbance in November, leading to an interesting weather period starting in December.

"There is no way this comes true." While November continues with its strange meteorology, the models keep drawing increasingly strange scenarios. At this point in the week, we cannot rule out a more than considerable winter storm on the 18th and 19th, with the 'beast from the east' looming over Western Europe. In the coming hours we will see a war between models: the American one marks a cold outbreak over Santander; the European one says no. Little by little, the two seem to be converging on a cold scenario. It is too early to say, but within a very few hours we will be plucking the daisy's petals. In any case, the central point is that all of this is peccata minuta, a minor matter, next to the main event.

The breaking of the vortex. Apart from that event in the middle of next week, autumn will continue to be very warm and mild on almost every continent. However, this could change if a sudden stratospheric warming appears; that is, if the vortex breaks.

Sudden stratospheric warming? To understand it simply, we have to remember that the atmosphere is a kind of "lasagna of air layers," and each layer follows its own logic. That is, they work quite differently and independently.
As far as it affects us: the circulation of air in the troposphere (the layer closest to the surface) and the circulation in the stratosphere (the layer directly above) are related, yes; but, in general terms, each does its own thing. During a "sudden stratospheric warming," a part of the stratosphere warms rapidly and, as a consequence, the circulation at high altitude is profoundly disrupted. That is, for a few days, everything turns upside down.

And what happens? The most common consequence is that the polar vortex weakens and may break down. The polar (Arctic) vortex is a current of air that runs from west to east around the North Pole and confines cold air to high latitudes. When this current is strong and stable, it keeps that cold air locked in, preventing it from flowing towards places like Spain. If the vortex destabilizes and its winds lose strength (due, for example, to a "sudden warming"), it is relatively common for cold air masses to escape on their way south.

What if it doesn't break? In reality, the vortex does not even need to break. It only needs to move from the Arctic region to lower latitudes. Since it drags a huge mass of cold air with it, the result is always very similar: an icy cold that can turn any country upside down (even the best prepared ones). And that seems to be what we are going to see. It is hard to know whether it will affect us or not, but there is no doubt that the late-fall weather is getting "interesting." Image | Meteociel In Xataka | The last hope of winter in Spain is desperate, but increasingly possible: the breaking of the polar vortex

The industry became obsessed with training AI models, while Google prepared its masterstroke: inference chips

In recent years, what was truly relevant was training AI models to make them better. Now that they have matured and training no longer scales as noticeably, what matters most is inference: that AI chatbots work quickly and efficiently when we use them. Google saw this shift in focus coming, and it has chips prepared precisely for it.

Ironwood. This is the name of the new chips in Google's famous family of Tensor Processing Units (TPUs). The company, which began developing them in 2015 and launched the first ones in 2018, is now reaping especially interesting fruits from all that effort: some really promising chips, not for training AI models, but for letting us use them faster and more efficiently than ever.

Inference, inference, inference. These "TPUv7" chips will be available in the coming weeks and can be used to train AI models, but they are especially aimed at "serving" those models to users. This is the other big leg of AI chips, the really visible one: training the models is one thing, and "executing" them so they respond to user requests is quite another.

Efficiency and power as its banner. The leap in performance of these AI chips is enormous, at least according to Google. The company claims that Ironwood offers four times the performance of the previous generation in both training and inference, and that it is "the most powerful and energy-efficient custom silicon to date." Google has already reached an agreement with Anthropic so that the latter has access to up to one million TPUs to run Claude and serve it to its users.

Google's AI supercomputer. These chips are the key components of the so-called AI Hypercomputer, an integrated supercomputing system that, according to Google, allows customers to reduce IT costs by 28% and achieve an ROI of 353% over three years. In other words: they promise that if you use these chips, your investment will be multiplied by more than four over that period.

Almost 10,000 interconnected chips.
The new Ironwoods are also designed to join forces in a big way. Up to 9,216 of them can be combined in a single node or pod, which theoretically makes the bottlenecks of the most demanding models disappear. A cluster of this type is enormous, offering up to 1.77 petabytes of shared HBM memory while the chips communicate over 9.6 Tbps of bandwidth thanks to the so-called Inter-Chip Interconnect (ICI).

More FLOPS than anyone. The company also claims that an "Ironwood pod" (a cluster of those 9,216 Ironwood TPUs) offers 118x more FP8 ExaFLOPS than its closest competitor. FLOPS measure how many floating-point math operations these chips can perform per second, which means that basically any AI workload should run in record time.

NVIDIA has more and more competition (and that's a good thing). Google's chips are a demonstration of companies' clear determination to avoid too many dependencies on third parties. Google has all the ingredients to do it, and its TPUv7 is proof of this. It is not the only one: many other AI companies have long sought to create their own chips. NVIDIA's dominance remains clear, but the company has a small problem: in inference, CUDA is no longer so vital.

Once an AI model has been trained, inference operates under different rules of the game than training. CUDA support remains a relevant factor, but its importance in inference is much smaller. Inference focuses on obtaining the fastest possible answer. Here the models are "compiled" and can run optimally on the target hardware. This may cause NVIDIA to lose relevance to alternatives like Google's. In Xataka | When you're OpenAI and you can't buy enough GPUs, the solution is obvious: make your own
