China prepares a 2nm AI chip to end NVIDIA’s dominance. Its problem is how to manufacture it

A new chip designer for artificial intelligence (AI) is preparing to take the field in China, and it intends to make a lot of noise. In fact, it is already doing so. It’s called Dishan Technology and, according to SCMP, it is already verifying the prototype of a 2nm AI GPU that uses a hybrid integration technology combining FinFET and GAA (Gate-All-Around) transistors.

However, that is not all that has emerged. According to Dishan Technology, this chip will be 40% more energy efficient than its predecessor and will be compatible with NVIDIA’s CUDA (Compute Unified Device Architecture). CUDA brings together the compiler and development tools that programmers use to write software for NVIDIA GPUs, so if Dishan’s chip is truly compatible, it will be much easier to integrate into facilities that already run GPUs from the American company.

Although, as mentioned above, Dishan already has a prototype of its chip, it will take another year or two to refine the technology enough to make large-scale manufacturing possible. Be that as it may, what has not been revealed is who is going to manufacture it. SMIC, the largest Chinese semiconductor producer, can currently only manufacture 7nm chips using multiple patterning. And TSMC, Intel and Samsung, which could produce it, are unlikely to do so in the current geopolitical context, given the demands of US sanctions on China. We will see how Dishan Technology solves this challenge.

China already has three “champions” in its AI chip ecosystem

The country led by Xi Jinping already has three clear alternatives to NVIDIA. Although not as well known as Huawei or Moore Threads, Cambricon Technologies is one of the companies specialized in designing GPUs for AI with the greatest growth potential. In fact, in August 2025 it received approval from the Shanghai Stock Exchange (China) to raise $560 million.
It is allocating those funds to the design of four chips for training and inference of AI models, and also to the development of an alternative to CUDA.

Moore Threads has developed several GPUs that rival advanced solutions from NVIDIA, AMD or Huawei

On the other hand, Moore Threads has developed several GPUs for AI applications that, on paper, rival some of the advanced solutions that NVIDIA, AMD or Huawei have placed on the market. The MTT S4000 and MTT S3000 cards are its most interesting proposals right now, although, curiously, its portfolio also includes the MTT S80, a card for gaming and content creation that, according to Moore Threads itself, delivers 14.4 TFLOPS in single-precision floating point operations.

The other indispensable player in the Chinese AI chip industry is Huawei. Its most ambitious proposal right now is the Ascend 950PR chip, which aims to surpass the performance of NVIDIA’s H100 GPU. However, this Chinese company also launched its Ascend 910D and 920 chips last year. The latter is clearly intended to compete in the Chinese market with NVIDIA’s H20 GPU. Presumably it will launch its Ascend 950DT chip at the end of 2026, and the Ascend 960 and 970 GPUs will arrive in 2027 and 2028 respectively.

Image | Generated by Xataka with Gemini

More information | SCMP

In Xataka | TSMC acknowledges that it has considered taking its factories out of Taiwan. It’s impossible for a good reason.

In Xataka | The looming bottleneck in AI is neither RAM nor gas: it’s that TSMC’s N3 node is absolutely saturated

Samsung is NVIDIA’s best friend. AMD just got into the relationship and TSMC looks askance

Lisa Su has been at the head of AMD since 2014. Captaining such an important ship, one would assume that at some point she would have visited one of its main component suppliers. But it turns out that, in her role as CEO, she had never traveled to South Korea, home to one of the world’s leading foundries. The trip has paid off, and AMD comes back with latest-generation memory. But the happiest party is the one that is going to let AMD and NVIDIA build their new AI platforms: Samsung.

Visiting. In Pyeongtaek, south of Seoul, sits one of Samsung’s main factories. The South Korean company is expanding with the objective of becoming one of the big names of the American industry while maintaining its local muscle, and the plant inaugurated a few years ago is an example of that. As could not be otherwise in these times, the facility is focused on producing the memory chips that power AI hardware. SK Hynix and Micron are Samsung’s two big competitors in this field, and they too are opening and purchasing plants to increase their memory production. And AMD wants a piece of that pie, because Samsung is, right now, the main supplier of next-generation memory.

The agreement. The trip, apart from seeing the facilities, was the perfect setting to announce that Samsung will become AMD’s main supplier of HBM4 memory. Specifically, for the Instinct MI455X GPU, the American company’s next generation. When we talk about GPUs for AI we talk mostly about NVIDIA (which also just presented news), because it is pushing hard everywhere (and in every sector), but AMD is the other big player that doesn’t want to be left out of the conversation. It is signing billion-dollar agreements with companies like Meta, its growth forecasts are stratospheric and, although far behind NVIDIA, it wants to be in charge of providing the hardware for AI.

Happy managers | Photo: Samsung

HBM4.
That Samsung is the one supplying HBM4 memory to AMD is great news for AMD, because Samsung has, at least for the moment, the most refined manufacturing process for this type of memory. In the past Samsung had already supplied HBM3E for AMD’s current MI350X and MI355X accelerators, but the new agreement means AMD will get access to the same type of memory that Samsung, for now, supplies exclusively to NVIDIA. Memory is not everything, obviously, but it plays a fundamental role: the higher the bandwidth, the more data per second the chip can handle. Think of this memory as a very wide, perfectly paved highway. And Samsung was the only one that had managed to pass NVIDIA’s demanding tests for its new Vera Rubin architecture.

Samsung at its best. In this agreement both parties clearly win, but Samsung has been racking up recognition in recent months. Securing the agreement first with NVIDIA and now with AMD sets it apart from its main rival, the also South Korean SK Hynix, which is somewhat further behind in the development of its HBM4 chips. Furthermore, AMD’s press release indicates that Samsung will also supply DDR5 memory for AMD’s EPYC servers, and the possibility of Samsung manufacturing some of AMD’s future chips has been discussed. Because Samsung manufactures memory, yes, but also other processors. It has its own Exynos for the Galaxy S26, in the past it manufactured the most powerful Qualcomm Snapdragons, and it has again been proposed that the South Korean company make Qualcomm’s 2-nanometer chips. It has also already won a contract worth more than $16 billion with Tesla to create AI-focused chips. It is clear that TSMC is the main foundry in the world, but Samsung is determined to be one of the main hammers with which the future of AI is built.
And, speaking of the devil, the agreement means that Samsung manages to eat into TSMC’s business, while AMD gains a way to reduce its dependence on the Taiwanese company. Because at TSMC, we already know who the best friend is, and it is undoubtedly NVIDIA.

In Xataka | “It’s not a temporary squeeze, it’s a tsunami”: we are seeing live how the cheap smartphone disappears

If anyone was waiting for the AI bubble to burst, NVIDIA’s results have a message: sit tight

NVIDIA just published its results for the fourth quarter of its last fiscal year, and it has left Wall Street speechless. Revenues of $68.1 billion, a net profit that almost doubles that of the same period of the previous year, and a forecast for the following quarter that has far exceeded analysts’ expectations. And all this in a turbulent context where more efficient models and other alternatives are beginning to appear. The DeepSeek crash feels far away, and demand for chips is not slowing down. Here are the numbers in detail.

In case its position was not clear. Only a handful of companies in history have exceeded $100 billion in annual profit. Alphabet, Microsoft and Apple are in that club. NVIDIA has just joined them, with $120 billion in profits over the last twelve months, according to the report. The difference is speed: just three years ago, its annual profit was $4.4 billion. We can say with certainty that no technology company has ever grown so quickly at that scale.

AI, and more AI. The engine driving these profits is its data center business, which generated $62.3 billion in the quarter, 71% more than a year ago. Within that segment, revenue from its Blackwell chips has gone from $32.6 billion to $51.3 billion, while networking (NVLink, Spectrum-X and InfiniBand) grew from $3 billion to $11 billion. Gross margin is 75%, and earnings per share nearly double to $1.76 in GAAP terms (the official rulebook companies follow to ensure transparent accounting).

What Jensen Huang says. “Without computing, there is no way to generate tokens. Without tokens, there is no way to grow revenue,” the CEO of NVIDIA told investors directly on the earnings call. His thesis is that in the new AI economy, computing power directly equates to revenue for NVIDIA’s customers.
That is why the large cloud service providers (Google, Amazon, Microsoft, Meta) keep increasing their capex budgets, which together will exceed $500 billion in 2026 to build AI data centers. And NVIDIA is the main beneficiary of that spending.

What DeepSeek did not break, but accelerated. At the beginning of 2025, the emergence of the Chinese DeepSeek model caused an unprecedented tremor in the markets, leaving a simple question in our minds: if AI becomes more efficient, why do we need so many chips? The answer from NVIDIA’s results is that efficiency does not reduce infrastructure demand, it multiplies it. Every improvement in inference efficiency lowers the cost per token, encouraging more companies to deploy more AI applications, which in turn requires more compute. It’s Jevons’ paradox applied to AI: efficiency expands the market instead of contracting it.

Agentic AI as the next catalyst. On the same call with investors and analysts, Huang highlighted that “enterprise adoption of agents is skyrocketing.” AI agents, systems that make decisions and execute tasks autonomously, require many more inference cycles than chatbots. They are the next step in the AI value chain, and NVIDIA is once again in a privileged position. Colette Kress, the company’s CFO, also confirmed that the first samples of Vera Rubin, the next generation of chips arriving later this year, have already been shipped.

China and the competition. Not everything is rosy. NVIDIA acknowledged that its forecast for the next quarter ($78 billion) does not include computing revenue from China. The company has generated just about $60 million from H20 chips since the Trump administration reapproved some sales in August 2025, according to SEC filings, and has yet to earn revenue from the more recently approved H200. Regulatory uncertainty with Beijing remains a pebble in Huang’s shoe.
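The Jevons-paradox argument above can be made concrete with a toy model. All of the numbers here (the demand elasticity of 1.5, the reference cost and volume) are illustrative assumptions, not NVIDIA or analyst figures; the point is only that when demand for tokens is sufficiently price-elastic, cheaper tokens mean *more* total compute spending, not less:

```python
# Toy illustration of the Jevons-paradox argument (assumed numbers, not from
# the article): if demand for AI tokens is price-elastic, a drop in cost per
# token increases total compute spending instead of reducing it.

def tokens_demanded(cost_per_token: float, elasticity: float = 1.5) -> float:
    """Hypothetical constant-elasticity demand curve: demand ~ cost^-elasticity."""
    base_tokens = 1e12   # tokens demanded at the reference cost (assumed)
    base_cost = 1e-6     # reference cost per token in dollars (assumed)
    return base_tokens * (cost_per_token / base_cost) ** (-elasticity)

def total_spend(cost_per_token: float) -> float:
    """Total dollars spent on tokens at a given unit cost."""
    return cost_per_token * tokens_demanded(cost_per_token)

before = total_spend(1e-6)    # spending at the reference cost
after = total_spend(0.5e-6)   # spending after a 2x efficiency gain

print(f"spend before: ${before:,.0f}")   # $1,000,000
print(f"spend after:  ${after:,.0f}")    # $1,414,214 -- cheaper tokens, MORE spend
assert after > before
```

With an elasticity below 1 the opposite would happen and efficiency would shrink the market; the article’s claim is, in effect, a bet that AI demand sits on the elastic side of that line.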
In parallel, competitors such as AMD, Broadcom or Google’s own custom chips (TPUs) are gaining ground. But the NVIDIA CEO remains focused on his vision. As he pointed out on the call: “Every company depends on software, and all software will depend on AI.” As long as that holds, everything indicates that NVIDIA will keep selling the picks and shovels.

Cover image | NVIDIA

In Xataka | NVIDIA was founded by three engineers, but only Jensen Huang remains CEO: “I wish I had kept some shares”

If you’re in a hurry to upgrade your PC, NVIDIA’s CEO has bad news: don’t be in a hurry

Talking about artificial intelligence is talking about Jensen Huang. The CEO of NVIDIA has become the face of an entire industry: that of artificial intelligence. To a large extent, it is his company’s products that are driving the engine of data centers and, at the same time, the enormous semiconductor and memory industries supply the essential components of NVIDIA’s GPUs. And if Huang has spent the past few weeks saying that 2026 is going to need wafers and a lot of RAM, he has now asked for patience with AI. Because, in his view, the climb has another seven or eight years left.

In short. When we talk about artificial intelligence, there are two poles. On one side, those who see signs of a bubble that will burst in the short term. On the other, those who defend the billion-dollar investment against all odds. In that boat is Jensen Huang, who recently noted on CNBC that this massive spending is “necessary and appropriate” because a “once-in-a-generation infrastructure” is taking shape. The most interesting part is that, for him, this race will continue for several years: he reckons the investment in and construction of AI infrastructure has seven or eight years left.

Mountains of money. In his statements, Huang pointed out that companies like Anthropic and OpenAI are making money despite everything they have invested, and that their current brake is not so much budget as the limit of available computing power. That is why he wants his suppliers (Samsung with next-generation HBM4 memory, or TSMC with processors) to pick up the pace. It remains to be seen, however, whether that pace can be maintained over the next five years. On CNBC, the CEO of NVIDIA argued that, despite the astronomical amounts of money, the spending is sustainable. And the proof, he says, is that it keeps increasing. If in 2025 the total spending of Big Tech did not reach $400 billion, this year the figure for American companies is expected to rise to $650 billion.
Amazon and Alphabet (Google) alone will invest about $385 billion. “They see the AI computing race as the next ‘winner takes all,’ and none are willing to lose,” DA Davidson analyst Gil Luria told Bloomberg.

Parallel race. And that, as we say, is just the American companies, since China is the other pole in this race for artificial intelligence. The Asian giant is the birthplace of several extremely capable models, but also of something that is missing in the United States: the energy to feed AI’s enormous needs. China is betting on AI, but also on robotics, and all while it both buys NVIDIA products and develops its own semiconductor network with the goal of achieving technological sovereignty. It is a race parallel to that of the United States, and beyond the two poles of infrastructure development, there are particular names. With so much money being invested, opportunities are being created, and there are companies that have gone through a bad patch and want to ride the wave. For example, an Intel that, after needing a rescue by the United States, is positioning itself as one of the country’s great foundries. In addition, it is putting a foot into a segment it had not explored, DRAM memory, and it is doing so with the Japanese giant SoftBank. Japan has not had a say in the memory industry since the 1980s, when South Korea snatched its position, and now it may have another chance.

Translation for the user. These are a couple of examples of companies that are taking advantage of the conditions to obtain financing and expand, seeking to position themselves in what they have decided is the future of the technology industry. With that amount of money and investment, there is a question you may be asking yourself: will I be able to buy a PC? The answer is not hopeful.
Giants like Micron, one of the heavyweights in the RAM segment, are investing heavily to expand facilities and increase their capacity to produce memory, but it will not be for us: it will be for data centers. If the end of 2026 or 2027 had been targeted as the end of the crisis for components like RAM or SSDs (which are, after all, components built on memory modules), now it is Lip-Bu Tan, CEO of Intel, who states that it won’t be until 2028, at the earliest, that we will see light at the end of the tunnel. So, yes, the entire tech industry has turned to AI, and those that can increase their production of key components will do so over the next few years. The issue is that they are going to focus on components that mean nothing to ordinary users, neglecting those we really need on a day-to-day basis. And an example is NVIDIA itself.

Image | NVIDIA

In Xataka | Apple has been the industry’s first customer for decades. AI is relegating it to the background

Intel refuses to be left out of the AI race. Its next move points directly to NVIDIA’s territory

The AI fever is not only redefining software, it is also turning the map of power in the chip industry upside down. On this new board, the GPU has become the essential engine for building models and scaling data centers, to the point that demand has skyrocketed and placed its main manufacturers in a dominant position. For Intel, the diagnosis is difficult but evident: if the next decade of computing is decided in this arena, it is not enough to protect the kingdom of the CPU.

Intel’s move. The Santa Clara company chose a very specific setting to begin laying out its plans. During an AI Summit organized by Cisco, the company’s CEO, Lip-Bu Tan, said that Intel will start to produce GPUs and has just hired the “chief GPU architect” who will lead that effort. The executive avoided giving a name, but he did leave a message consistent with the sector’s moment: the GPU matters and will continue to matter.

The missing piece. According to Reuters, the talent hired by Intel is Eric Demers, from Qualcomm. The initiative would fall under the umbrella of Kevork Kechichian, executive vice president and head of Intel’s data center business, who joined in September as part of a series of hires aimed at strengthening the company’s technical profile.

AI before gaming. The nuance is important, because talking about GPUs can automatically conjure images of graphics cards for gaming, but reality points in another direction. Intel already has a presence in PC graphics with its Arc products, but the announcement targets GPUs for AI and data centers. The initiative is still an early-stage plan, with a strategy that will be developed based on customer demand, a coherent approach in an AI infrastructure market where the most intense battle is being fought today.

Intel’s corporate moment.
According to CNBC, Intel’s stock market value has risen over the last year on the back of optimism about its foundry business, but the company is still mainly dedicated to manufacturing chips for its own catalog. It’s no secret that Intel has lost ground to companies propelled by the AI data center wave, and it is now taking steps to respond.

No relief until 2028. In the same forum, Tan slipped in another element that helps size up the challenge of AI infrastructure. He spoke of the memory chip shortage that is disrupting the market due to the mismatch between supply and demand, driven by the construction of AI-oriented data centers. That environment is giving manufacturers room to keep raising prices, and Tan was blunt, describing AI as the “biggest challenge” for memory. He also offered an estimate that leaves little room for optimism: he does not expect any relief “until 2028.”

Images | Brecht Corbeel

In Xataka | Goodbye to the duopoly of Intel and AMD in Windows: the arrival of NVIDIA processors is imminent and brings 8 laptops under its arm

That the US authorizes Nvidia’s H200 to reach China is not a concession, but a plan. They prefer money to competition

The chip war between China and the US has mutated from a blockade into a commercial transaction. Donald Trump has announced that he will allow Nvidia to export its high-performance H200 chips to China. The authorization carries an unprecedented condition: the US government will receive a 25% commission on these sales. This “reverse tariff” turns the containment of China into a source of income, breaking with the strategy of total suffocation and offering Nvidia a lifeline in its most critical market.

End of the free blockade. The decision is a direct result of a meeting last week between Trump and Jensen Huang, CEO of Nvidia. The White House’s logic has changed: it argues that the measure is carried out under strict national security conditions, extending the model to competitors such as Intel and AMD. It is a move that formalizes what was already intuited a few months ago, when Nvidia managed, after a first meeting with Trump, to get the veto on the H20 chip lifted. At that time, a precedent was set of transferring 15% of revenue to the country, a figure that now scales to 25% for the most powerful hardware.

A dose for China. That they chose this chip is no coincidence: the H200 is significantly more powerful than the H20 (the trimmed-down model that China had started to boycott) but it is still behind the cutting-edge Blackwell architecture, which remains banned. According to advisors such as David Sacks, the US seeks to keep China hooked on its technology: if the Chinese are denied all access, they are forced to develop alternatives of their own. In fact, Huawei has already admitted that it will take two years to match the performance of the H200, making this chip the perfect tool to slow down Chinese development while monetizing its need.

Cracks and black market. The reality is that the total blockade was failing.
Recent investigations showed how Chinese companies used shortcuts through Indonesia to access the power of banned chips. Furthermore, the second-hand market had become the main avenue for China to get H100 and A100 GPUs under the radar. By allowing the sale of the H200, the US is trying to regain control over a flow that already existed, but in the shadows. At the same time, the Department of Justice has announced “Operation Gatekeeper” to dismantle smuggling networks in places like Hong Kong.

China’s response. The great unknown is precisely that: how the news is received in Beijing. Although Trump claims that Xi responded “positively,” the reality on the ground seems different. China has for months been banning its local businesses from buying Nvidia chips in order to promote its domestic industry. The CAC (Cyberspace Administration of China) even investigated the H20 looking for backdoors, something that generated a climate of mistrust that not even the previous July agreement managed to completely dissipate. Jensen Huang, who warned about the danger of an “AI silk road” if the US continued to block sales, gets with this pact a golden opportunity not to lose a market that represents 13% of Nvidia’s revenue, although his Chinese clients must now pay the price of American geopolitics.

Cover image | Composition with images from Nvidia and RawPixel

In Xataka | China has just redrawn the map of strategic minerals: its new rules on rare earths target the United States

Google’s TPUs are the first big sign that NVIDIA’s empire is faltering

It was 2013, and Jeff Dean, one of Google’s directors, realized something along with his team: if each Android user used the new voice search option for three minutes a day, the company would have to double its number of data centers to cope with the computational load. At the time, Google was using standard CPUs and GPUs for the task, but the team panicked and realized they needed to create their own chips. This is how Google’s first Tensor Processing Unit (TPU) was born, an ASIC specifically designed to run the neural networks that powered its voice services. The effort grew and grew, and by 2015, before the world knew it, those first TPUs were accelerating Google Maps, Google Photos and Google Translate. A decade later, Google has created TPUs so powerful that they have, almost unintentionally, become a surprising and unexpected threat to the almighty NVIDIA. No small feat. Blessed panic.

Google’s TPUs keep their promise

Until now, when an AI company wanted to train its models, it turned to advanced NVIDIA chips. That has changed recently, and in fact we have seen two signs that clearly mark a turning point.

Missing from that timeline is the latest and most striking member of this family, Ironwood, presented in April 2025. Source: Google.

The first is the release of Claude Opus 4.5, an exceptional model, especially in programming tasks. Anthropic has already explained that this new model does not depend only on NVIDIA, but combines the power of three different platforms: NVIDIA’s, but also Amazon’s Trainium and Google’s TPUs. On top of that, Google has made a splash because its brand-new AI model, Gemini 3, has been trained exclusively on the new Ironwood TPUs, which were presented in April and have become a real sensation.
As we said, Google started that project in 2013 and launched its first TPU in 2015, but that internal need became a blessing, because what Google couldn’t know was that these TPUs would end up arriving at just the right time: the launch of ChatGPT turned them into a fantastic opportunity to strengthen its AI infrastructure, and also into the hardware for training and inference of its own AI models. From there we reach the current Ironwood TPUs, which in their seventh generation are exceptional both in inference and in training (as their use for Gemini 3 has demonstrated). Google has managed to squeeze even more out of its chips and has doubled peak FLOPS per watt compared to the previous generation.

Source: Google.

The efficiency and power of these chips represent a very notable jump over their predecessors: for example, they achieve double the FLOPS-per-watt performance of the Trillium chips. And compared with the TPU v5p of 2023, the new chips reach 4,614 TFLOPS, 10 times more than the 459 TFLOPS of those models from two years ago. It’s an extraordinary leap in performance (and efficiency).

The key to 2025: Google now lets others use its TPUs

But in the evolution of TPUs there is another differentiating element in 2025. This has been the year in which Google stopped “being selfish” with its TPUs. Before, only Google could use them, but in recent months it has reached agreements with OpenAI (which is also seeking to make its own chips) and, especially, with Anthropic.

The performance of Ironwood is already comparable to that of NVIDIA’s GB200 and even GB300. Source: SemiAnalysis.

That second alliance is especially monumental as part of that opening-up strategy. Google is not only renting capacity in its cloud, but also enabling the physical sale of hardware. The agreement covers one million TPUs: 400,000 units of its TPUv7 Ironwood sold directly through Broadcom, and 600,000 rented through Google Cloud (GCP).
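The generational figures cited above are easy to sanity-check. Using only the per-chip numbers reported in the text (4,614 TFLOPS for Ironwood vs. 459 TFLOPS for the TPU v5p):

```python
# Quick check of the generational jump described above, using the per-chip
# figures cited in the text (as reported from Google's published numbers).
tpu_v5p_tflops = 459      # TPU v5p (2023), peak TFLOPS per chip
ironwood_tflops = 4_614   # Ironwood / TPUv7 (2025), peak TFLOPS per chip

speedup = ironwood_tflops / tpu_v5p_tflops
print(f"Ironwood vs. v5p: {speedup:.1f}x")   # 10.1x, matching the "10 times" claim
assert round(speedup) == 10
```

The separate FLOPS-per-watt doubling versus Trillium is an efficiency claim, not a raw-throughput one: at an equal rack power budget, that factor alone would double deliverable compute.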
In a deep report, SemiAnalysis reveals that, from a technical perspective, the TPUv7 Ironwood is a formidable competitor. The performance gap with NVIDIA is closing, and Google’s TPU is practically on par with NVIDIA’s Blackwell chip in FLOPS and memory bandwidth. However, the real advantage lies in cost: the Total Cost of Ownership (TCO) of an Ironwood server is estimated to be 44% lower for Google than that of an NVIDIA GB200 server, allowing the search giant to offer very competitive prices to clients like Anthropic.

To help even more in that race, SemiAnalysis points out, Google has another ace up its sleeve. This is Google’s Inter-Chip Interconnect (ICI), a network architecture that allows up to 9,216 Ironwood chips to be connected in a 3D torus topology. Google also uses optical circuit switches that route data optically, without electrical conversion, reducing both latency and power consumption. This lets Google reconfigure the network’s topology on the fly to avoid (or mitigate) failures and to optimize for different types of parallelism.

NVIDIA’s “moat” with CUDA is narrowing

We have often repeated that although semiconductor manufacturers already have flashy chips (just look at AMD), NVIDIA’s true strength lies in CUDA, the software platform that has become the de facto standard for AI developers and researchers. Google also wants to change things here. In recent years the company tried to focus on Python libraries such as JAX or XLA, but lately it has started prioritizing native PyTorch support (PyTorch being TensorFlow’s great competitor) on its TPUs. That is crucial to making it easier for engineers and developers to start migrating to its TPUs instead of NVIDIA GPUs. Before, it was possible to use PyTorch on TPUs, but it was cumbersome, as if one had to speak a language using a dictionary in real time, while for NVIDIA GPUs PyTorch was the “native” language.
With XLA, Google used an intermediate library as a translator to be able to run PyTorch, but that was a nightmare for developers. Native support allows Google’s TPUs to behave just like NVIDIA GPUs in the …
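Going back to the hardware side for a moment, the 3D torus that the ICI section describes is easy to picture in code. Note that the 16×24×24 shape below is purely an assumption chosen because it multiplies out to 9,216 chips; Google has not published the exact dimensions here:

```python
# Illustrative sketch of a 3D torus topology like the one described for
# Google's ICI. The 16x24x24 shape is an assumption (16*24*24 == 9,216 chips);
# the real fabric's dimensions may differ.
DIMS = (16, 24, 24)  # hypothetical torus dimensions

def torus_neighbors(x: int, y: int, z: int) -> list[tuple[int, int, int]]:
    """Each chip links to 6 neighbors; coordinates wrap around (the 'torus' part)."""
    nx, ny, nz = DIMS
    return [
        ((x + 1) % nx, y, z), ((x - 1) % nx, y, z),
        (x, (y + 1) % ny, z), (x, (y - 1) % ny, z),
        (x, y, (z + 1) % nz), (x, y, (z - 1) % nz),
    ]

# Even a "corner" chip has 6 distinct neighbors thanks to wraparound: there are
# no edge effects, which keeps worst-case hop counts low across the fabric.
print(torus_neighbors(0, 0, 0))
assert len(set(torus_neighbors(0, 0, 0))) == 6
assert 16 * 24 * 24 == 9216
```

The optical circuit switches mentioned above effectively let Google rewire which physical chips occupy which coordinates of this logical torus, which is what makes the on-the-fly reconfiguration around failures possible.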

The US vetoed NVIDIA’s most powerful chips in China. It didn’t count on an unexpected problem: Indonesia

NVIDIA is at the center of the technological war between China and the United States. After the blockade, the US allowed the company to sell a version of its H20 chips specific to the Chinese market, but the most powerful chips, the Blackwells, are still banned in China. Or so we believed.

What is happening. Donald Trump made it clear that he does not want China to have access to Blackwell chips, but despite the blockade, an investigation by the Wall Street Journal shows how Chinese companies are benefiting from the computing power of these chips through legal shortcuts.

The process. The investigation details the path NVIDIA’s Blackwell chips travel until INF Tech, a Shanghai-based startup, uses their computing power:

NVIDIA sells its chips to Aivres. Aivres is a Silicon Valley company partially owned by Inspur, a Chinese company on the US blacklist. NVIDIA cannot do business with Inspur or its partners, but the blockade does not affect partners based in the US, as is the case with Aivres.

Aivres sells the chips to Indonesia. Specifically, to an Indonesian communications provider called Indosat Ooredoo Hutchison. The agreement includes the sale of 32 NVIDIA GB200 racks with 72 Blackwell chips each; more than 2,300 chips worth $100 million.

Indonesia sells computing power to China. The end customer for this cloud computing power is INF Tech, which will use it to train AI for financial and medical research applications. This point is key, as we will see later.

Why it is important. The investigation calls into question the true effectiveness of US blockades and regulations. By using intermediaries in other countries, Chinese companies can circumvent the restrictions and access the most powerful chips, all without technically violating them.

Cracks. Under the Trump administration’s controls, the deal is legal as long as INF Tech does not use the chips to help the government with military intelligence applications or to develop weapons.
However, it is difficult to know what the capacity is actually being used for, and in fact in the US there are suspicions that the Chinese government is leaning on the private sector to improve its military technology.

Disagreement. If there is a crack, the logical thing would be to seal it. The Biden administration tried to tighten these rules to prevent chips from being sold to countries that are not close allies of the United States. That would have prevented the sale to the Indonesian company, but when Trump returned to power he decided not to go ahead with the new rules: instead of the government policing it, it should be the companies themselves.

Interests. The US blockades seek to give the country an advantage over China in the AI technological race, all for reasons of “national security.” It is contradictory that they leave open these cracks through which the chips end up slipping in legally. The one who is delighted is NVIDIA. Speaking to the Wall Street Journal, a company spokesperson came out in favor of Trump’s decision, saying that “Biden’s controls cost taxpayers tens of billions, paralyzed innovation and ceded ground to foreign rivals.”

Image | NVIDIA, Pexels

In Xataka | The Chinese government has taken a definitive step to break NVIDIA’s dominance in China: prioritize “national” chips

Everyone is developing chips that compete with NVIDIA’s. They are in the wrong race

Qualcomm announced on Monday that it is working on AI accelerator chips, which means new competition for NVIDIA. The company that dominates the AI hardware landscape is watching a large group of competitors try to erode that position, but the problem for all of these companies is not the chips. It is something else, and it is called CUDA.

What has happened. Qualcomm has announced the AI200 chip, which will go on sale in 2026, and the AI250, which will follow in 2027. Both will be able to work in rack-type systems with liquid cooling. Qualcomm's servers may pack up to 72 chips based on the Hexagon NPUs of the company's Snapdragon SoCs.

Inference yes, training no. The company has revealed that its chips focus on inference (the execution of AI models) and not on training. Their rack-based systems will have lower operating costs than those of cloud system providers, Qualcomm says. Each rack consumes 160 kW, a figure comparable to the consumption of some racks based on NVIDIA GPUs. There are no details about the price of these chips, the cards or the racks that will integrate them, nor about how many NPUs each rack can offer. What we do know, according to CNBC, is that Qualcomm's accelerator cards will support up to 768 GB of memory, more than NVIDIA or AMD offer in their current models.

Chips for third parties. The other important point is that Qualcomm will sell its AI chips and other components separately, allowing large AI companies to "customize" their own racks based on Qualcomm chips. It is the same philosophy the company has adopted in the world of its mobile SoCs. Investors viewed the news with exceptional optimism, and Qualcomm shares rose 11% in Monday's session.

NVIDIA dominates with an iron fist. In the AI chip segment, the king is NVIDIA. The company is the absolute protagonist of this market: according to CNBC it holds a 90% market share, which has allowed its valuation to skyrocket to $4.5 trillion.
That dominance could now be threatened by the avalanche of chips arriving from other manufacturers.

All against NVIDIA. AMD has its excellent Instinct, Google has its TPUs, Amazon its Trainium, Microsoft its Maia and Huawei its Ascend. All of them are striking alternatives to NVIDIA's chips, and little by little these solutions are being integrated into more and more data centers. But the real problem is not the hardware; it is the software.

The great challenge is to defeat CUDA. The de facto standard that AI developers use is CUDA, a platform that allows them to take full advantage of the capabilities of NVIDIA chips in the field of artificial intelligence. This hardware-plus-software combination is much more mature than that of its competitors, who have the hardware part resolved (or are on the right track) but do not have a platform comparable to CUDA. AMD has ROCm, which is especially interesting because it is open source, but for now its features still do not match those of CUDA.

Reinventing the wheel? CUDA has been on the market for almost two decades, which means that most academic research and pioneering models, such as ImageNet, were written for CUDA. It is not just a language: it is a vast collection of libraries, optimized frameworks (like cuDNN), debugging tools and a huge community. Developing a competitor is basically like reinventing the wheel, migrations are expensive, and companies and startups will not find it easy to take them on.

China is also in the fight. And of course, if there is another great protagonist in this race, it is China. The Asian giant, previously dependent on NVIDIA, is seeking to break free from this manufacturer, and alongside the development of advanced AI chips it is also trying to build its own AI software to surpass CUDA.

In Xataka | AI is the best thing happening to nuclear fusion. The construction of ITER is already accelerating
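For context, the rack figures reported above can be turned into rough per-rack totals. This is a back-of-the-envelope sketch that assumes a fully populated rack with one accelerator card per chip slot (an assumption, since Qualcomm has not detailed rack configurations); the per-card power number is derived, not reported.

```python
# Rough totals for a fully populated Qualcomm rack, using only the
# reported figures: up to 72 chips per rack, up to 768 GB per card,
# 160 kW per rack. One card per chip slot is assumed for illustration.
cards_per_rack = 72
memory_per_card_gb = 768
rack_power_kw = 160

rack_memory_tb = cards_per_rack * memory_per_card_gb / 1024  # GB -> TB
power_per_card_kw = rack_power_kw / cards_per_rack           # derived figure

print(f"~{rack_memory_tb:.0f} TB of memory per rack, "
      f"~{power_per_card_kw:.1f} kW per card")
```

Under those assumptions a full rack would carry around 54 TB of accelerator memory, which helps explain why Qualcomm is pitching capacity and operating cost rather than raw training performance.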

Huawei already has its best strategy to end NVIDIA's dominance in China

In early 2025 NVIDIA held a share of no less than 95% of the Chinese chip market for artificial intelligence (AI). However, in recent weeks it has dropped to 50%. This abrupt decrease is largely due to the export restrictions on chips that the US government has imposed, although it is also driven by the development of competition within China.

Despite this unfavorable scenario, NVIDIA has something very important in its favor: CUDA (Compute Unified Device Architecture). Most of the AI projects currently being developed are implemented on CUDA. This technology brings together the compiler and development tools that programmers use to write software for NVIDIA GPUs, and replacing it with another option in projects that are already underway is a problem. Huawei, which aspires to an important share of this market in China, has CANN (Compute Architecture for Neural Networks) as its alternative to CUDA, but for the moment CUDA dominates the market.

Huawei is going to position CANN as an open source toolkit. This declaration by Li Guojie, a computer scientist from the Chinese Academy of Sciences who is considered an authority in China, clearly expresses how important these tools are in the ecosystem of AI models: "China must develop an alternative system to achieve self-sufficiency in AI (...) DeepSeek has had an impact on the CUDA ecosystem, but it has not overcome it completely because barriers persist. In the long term we need to establish a controllable set of software tools that surpasses CUDA." This is undoubtedly one of the great challenges China faces in this area, and probably its best option is CANN.
Over the last five months Huawei has launched two very competitive GPUs for AI, and it is about to take a very important step: CANN will be positioned as an open source toolkit. Its purpose, according to Eric Xu Zhijun, rotating chairman of Huawei, is "to accelerate the innovation of developers and make the chips of the Ascend family easier to use."

Xu Zhijun does not say it expressly, but what his strategy ultimately pursues is to increase the competitiveness of the Huawei ecosystem by attacking NVIDIA where it is strongest. In addition, Huawei has already begun to discuss with the main players in China's AI industry, as well as with its commercial partners, universities and research institutions, how it should build its open source ecosystem. If this initiative thrives, and presumably it will, it will represent a very important step forward on the road to China's technological independence.

Image | HiSilicon

More information | SCMP

In Xataka | NVIDIA has to deal with the absolute distrust of several US legislators. Its plan in China is in danger

In Xataka | The US wants to end the chips for China that are sold abroad. And China knows how to defend itself
