The “circular financing” between Nvidia and OpenAI could be the genius move of the century… or a collapse

Nvidia has announced a “strategic investment” of up to $100 billion in OpenAI. But it is an investment with a catch: OpenAI will use that money to buy Nvidia chips. The semiconductor maker thus becomes the financier of its own most important customer. Why it matters. The maneuver is dangerously reminiscent of the “circular financing” schemes that marked the end of the dot-com bubble in 2000. Companies like Lucent, Nortel and Cisco financed operators such as Global Crossing so they could buy their equipment. We are not the first to see this parallel at this stage of AI. When the bubble burst, suppliers and customers alike sank into a spiral of debt and overcapacity. The agreement will allow OpenAI to build data centers with a combined capacity of 10 gigawatts, equivalent to about 10 nuclear reactors. Jensen Huang, CEO of Nvidia, has acknowledged that this represents between 4 and 5 million GPUs: “double what we shipped last year.” A brutal scale. In figures. The numbers are astronomical. According to Huang himself in August, building a 1-gigawatt data center costs between $50 billion and $60 billion, of which about $35 billion goes to Nvidia chips. By that logic, the 10 projected gigawatts would cost more than $500 billion. The markets have reacted with euphoria: Nvidia shares rose almost 4%, adding $170 billion to its market capitalization. Jensen Huang’s company is already worth $4.5 trillion. Yes, but. The parallel with the dot-com bubble is unsettling. We already saw these same “vendor financing” schemes in the final stage of the 2000 tech bubble. They did not end well for any of the parties. The difference is that today’s numbers are far larger, even adjusting for inflation. The key question is whether the productivity gains of generative AI will make up for the money spent. Behind the scenes.
The agreement reflects the current state of the AI ecosystem: OpenAI desperately needs computing capacity to maintain its competitive edge with the 700 million weekly users of its products. But infrastructure costs are so high that it needs constant external financing. Nvidia, for its part, is looking to secure future demand for its most advanced chips. The agreement guarantees massive orders while consolidating its dominant position against competitors such as AMD and Intel. “It is a closed loop: Nvidia gives OpenAI money, and OpenAI uses it to buy Nvidia products,” summarizes Javier Pastor. The threat. Antitrust experts are already raising eyebrows. Andre Barlow, a lawyer specializing in competition, told Reuters that “the agreement could change the economic incentives of Nvidia and OpenAI, potentially locking in Nvidia’s chip monopoly with OpenAI’s software leadership.” The structure creates extra barriers that make it harder for competitors, such as AMD in chips or rival AI model makers, to scale their operations. Things look grim. In perspective. History is full of similar schemes that ended badly. Global Crossing, the telecommunications operator that went bankrupt in 2002, was financed precisely by the same suppliers that sold it equipment. When it emerged that real demand was far lower than projected, both Global Crossing and its financiers lost billions. The key question is whether demand for AI services will be enough to justify this hundred-billion-dollar investment, or whether we are watching the same speculative pattern repeat itself with even more exorbitant figures. As Bernstein analyst Stacy Rasgon concludes: “On the one hand, it helps OpenAI meet very ambitious infrastructure goals. On the other, it will further fuel concerns about ‘circular’ financing.” Featured image | In Xataka | OpenAI estimates it will bring in $200 billion in 2030. The figure, like everything at OpenAI, is extremely ambitious

Nvidia will invest $100 billion in OpenAI. In reality, it won’t spend a single euro

OpenAI has signed a “strategic agreement” with Nvidia. Under this agreement, Nvidia “intends to invest up to $100 billion” in OpenAI gradually, but the truth is that this investment is misleading. Above all because OpenAI will spend those $100 billion buying GPUs from Nvidia. It all stays in the family. What happened. The two companies have started the process of finalizing an agreement with a clear goal: to build and deploy AI data centers with a gigantic combined computing capacity of 10 GW. The investment will be made gradually and will be completed “as each gigawatt” of computing capacity is installed in those data centers. Nvidia will thus become a “strategic compute and connectivity partner” for OpenAI’s plans for new data centers, says OpenAI. Millions of GPUs. According to statements by Jensen Huang, CEO of Nvidia, that represents between four and five million GPUs. Or put another way: it is the number of AI GPUs they expect to ship this year, and “double what we shipped last year.” The “vendor finances buyer” strategy. This agreement is not a simple investment but a strategic partnership in which the hardware supplier pours a massive amount of money into its main customer. In return, that customer commits to building a massive infrastructure with the supplier’s technology. It is nothing more than a closed loop: Nvidia gives OpenAI money, and OpenAI uses it to buy Nvidia products. This sounds like a bubble. Several analysts are pointing out how this once again recalls the dot-com bubble, when companies lent each other money to buy each other’s products. That raises suspicions and questions about the long-term sustainability of these agreements. Companies strengthening each other. The circular agreement in fact serves to strengthen both companies and solidify their positions as dominant and indispensable players in the AI industry.
In fact, this strategic alliance makes life very difficult for rivals like AMD or Intel. Nvidia is worth $170 billion more. The announcement caused an immediate reaction in Nvidia’s valuation: its shares rose almost 4%. The market capitalization of Jensen Huang’s company grew by $170 billion in that session and is already touching $4.5 trillion, pulling even further away from Microsoft, Apple and Google, which are all above $3 trillion. Long live the hype. Here, once again, the narrative of expectations and hype gets reinforced. These companies’ confidence in the future of AI is plain, but they are interested parties, and for now the revenues of OpenAI (to say nothing of its rivals) are far below what they are spending on these technologies. The energy challenge. The plans to build infrastructure with 10 GW of capacity are also astronomical. According to some estimates, those 10 gigawatts are equivalent to the output of about 10 nuclear reactors, which typically provide around 1 GW per plant. A colossal cost. Today’s data centers range from very modest 10 MW facilities to extraordinary 1 GW ones. OpenAI’s plans would leave those facilities far behind in computing capacity. In August, Huang told investors that building a 1 GW data center costs between $50 billion and $60 billion, of which about $35 billion goes to Nvidia chips. With those figures, the total cost of those 10 GW of combined computing power would exceed $500 billion, a figure that, curiously, coincides with that of Project Stargate. Image | Flickr (TechCrunch) | Nvidia In Xataka | 5,000 “tokens” from my blog are being used to train an AI. I have not given my permission
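The arithmetic behind those figures is simple enough to check. A minimal sketch, using the round numbers Huang quoted rather than exact costs:

```python
# Back-of-the-envelope check of Huang's figures: $50-60 billion per
# gigawatt of data center, of which ~$35 billion goes to Nvidia chips.
# These are the article's round estimates, not exact costs.
GW_PLANNED = 10
COST_PER_GW = (50, 60)       # billions of dollars per gigawatt, low/high
NVIDIA_CHIPS_PER_GW = 35     # billions of that spent on Nvidia chips

total_low, total_high = (c * GW_PLANNED for c in COST_PER_GW)
nvidia_total = NVIDIA_CHIPS_PER_GW * GW_PLANNED
print(f"Total build-out: ${total_low}-{total_high} billion")
print(f"Nvidia's slice:  ${nvidia_total} billion")
```

Note what falls out of it: the $350 billion that would flow back to Nvidia in chip sales is well above the $100 billion it is putting in.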

Nvidia has paid $900 million for a single hire

Nvidia has signed a new AI talent. The news is not the hire itself but the way it was done: to “capture” him, it has made an investment of more than $900 million in the company he ran. It is a new hiring model that allows companies to “steal talent” without current legislation being able to do much about it. What happened. According to CNBC data, Nvidia has invested more than $900 million in the AI hardware startup Enfabrica, which makes connectivity components for AI servers. As part of that agreement, Nvidia has signed its CEO, Rochan Sankar, who will start working directly for the firm led by Jensen Huang. Clusters at full throttle. Enfabrica’s connectivity solutions (the company was founded in 2019) allow more than 100,000 GPUs to be connected and put to work on AI. This kind of solution can help Nvidia offer integrated systems built around its chips. Put another way: all of this points to new computing “supernodes” with thousands of Nvidia chips ready to be installed in large data centers. The firm already markets its GB200 NVL72, for example, but this agreement lets it go further in that field. They already knew each other. In 2023 Nvidia had already taken part in an investment round in which Enfabrica raised $125 million. In 2024 another round of $115 million brought in companies such as Arm, Samsung and Cisco. According to PitchBook, after that round Enfabrica’s valuation was $600 million: the investment Nvidia has just made is enormous by comparison. Big Tech invests fortunes to sign talent. This tactic is the same one Meta used in June, when it invested $14.3 billion in the startup Scale AI and signed its new czar of the superintelligence division, Alexandr Wang. Google did the same in July by announcing an agreement with Windsurf: it would invest $2.4 billion in it and, along the way, sign its CEO, Varun Mohan.
Microsoft did the same with Inflection and Mustafa Suleyman in July 2024, and Amazon also made its move at the time by signing executives from the startup Adept. A way to avoid legal problems. Traditionally, these kinds of operations were carried out through the well-known “acquihires”: a large company bought another, in many cases to get the talent, not the product or service the “victim” offered. Those deals have ended up drawing notable legal scrutiny, which has pushed large companies toward other ways of acquiring talent. These “pseudo-investments” are nothing more than a mechanism to obtain that talent without being so exposed (at least, for now) to legal scrutiny. And a distortion of the startup market. These operations, however, pose a significant problem for the global startup landscape. If a large company can resort to methods like this to sign talent, it changes the dynamics and strategies of startups themselves. After the investment in Enfabrica, the company is supposed to remain independent, but to what extent is this a covert acquisition? More fuel for the AI bubble. There is also a threat to the venture capital market. Big Tech companies are using their huge cash reserves to inflate startup valuations like this one. Not because of their market potential, but as a covert “hiring premium” for their founders. This can create a bubble and change the strategy of venture capital investors, who may now value talent more than long-term business viability. Image | Hillel Steinberg In Xataka | We still don’t have the four-day week and there are already CEOs dreaming of the next level: working only three days

One million terabytes and 24,000 Nvidia chips for a key mission

In an increasingly digitized world, where artificial intelligence (AI) is transforming how we work, do research and relate to one another, supercomputing has established itself as the yardstick of technological power. It is a strategic resource that accelerates advances in science, innovation and defense. Not all supercomputers play in the same league. Frontier, from the United States Department of Energy, marked a milestone in 2022 by becoming the first to officially break the exascale barrier, with 1.102 exaflops in the HPL benchmark. It was later joined by El Capitan and Aurora, also on American soil, consolidating the country’s leadership position on paper. In China’s case, information remains opaque, with very little public data about the status of its projects. Europe, however, has just made its move. Its first exascale supercomputer is already up and running: JUPITER. It is installed at the Jülich Supercomputing Centre in Germany, one of the continent’s most important hubs for advanced research. JUPITER is powered by the Nvidia Grace Hopper platform and based on Eviden’s BullSequana XH3000 architecture, a liquid-cooled system designed to squeeze out efficiency and performance. It is expected to reach up to 90 exaflops in artificial intelligence workloads. Its applications will be diverse, from climate research to neuroscience and quantum simulation, placing Europe in a new league of computing capacity. An inauguration with a historic air. On September 5 the official inauguration ceremony took place in Jülich, attended by German and European authorities and leaders of the technology industry. German Chancellor Friedrich Merz presented it as a pioneering project for Europe: “With JUPITER, Germany now has the fastest supercomputer in Europe and the fourth fastest in the world.
It opens up completely new possibilities, from training AI models to scientific simulations.” In the TOP500 list, JUPITER already appears as the fourth most powerful supercomputer in the world, only behind El Capitan, Frontier and Aurora in the United States. The European Union also highlights that it runs entirely on renewable energy, by contracting green supply on the German grid, and that its JEDI rack leads the Green500 energy efficiency ranking. The figures behind JUPITER. To grasp its magnitude, just review some technical data: 24,000 Nvidia GH200 Grace Hopper superchips; 51,000 network connections with InfiniBand Quantum-2 technology; storage capacity close to one exabyte; a modular installation with 50 specialized containers; a peak consumption of 17 MW, equivalent to about 11,000 homes; and a rack called JEDI that leads the world energy efficiency ranking. Why it matters for Europe. Europe had lagged behind in the supercomputing race for years, in a landscape dominated by the United States. JUPITER gives researchers, companies and academic centers direct access to a top-tier machine without depending on external resources. That means training their own talent, building experience in managing these systems and reinforcing technological sovereignty at a time when artificial intelligence and computing capacity have become strategic issues.
Concrete applications. The first projects already selected show how far a supercomputer of this category can go. Climate: the ECMWF is working on kilometer-scale simulations, capable of representing extreme storms and feeding the Destination Earth project, whose goal is to build digital twins of the planet. European language models: the TrustLLM consortium is training language models in multiple European languages for industrial and scientific applications. Neuroscience: with the Arbor simulator, neuron behavior will be modeled at the subcellular level, key to developing therapies against diseases such as Alzheimer’s. Quantum computing: JUPITER aims to beat the 50-qubit simulation record, a relevant step toward practical quantum computing. Astrophysics: the Max Planck Institute will use it to study cosmic reionization, the period in which the first stars and galaxies emerged. Particle physics: the University of Wuppertal will increase the resolution of its calculations on the muon, which could open the door to new discoveries. Video models: the University of Munich is exploring compression and diffusion architectures to advance applications ranging from medicine to autonomous driving. Multimodal models: the University of Lisbon is scaling open, multilingual models that integrate different fields of science and machine learning. Access and future. Researchers can request access to the system in calls that will be held twice a year. There are already 30 projects underway. The expected useful life is at least six years, which guarantees continuity and stability in a field where technology cycles are ever faster. A strategic move. JUPITER is not just a technological achievement. It is a strategic bet on giving Europe capacity of its own in an area where part of the future of science and artificial intelligence will be decided. With it, the continent finally has a tool that lets it compete at the highest level, with energy efficiency and technological independence.
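The published spec sheet can be cross-checked with a bit of arithmetic. A rough sketch, illustrative only, since peak draw is obviously not split evenly across chips:

```python
# Rough sanity check on JUPITER's published figures: 24,000 GH200
# superchips, 17 MW peak consumption, equivalent to about 11,000 homes.
# Simple averages for illustration; real per-chip power varies.
CHIPS = 24_000
PEAK_MW = 17
HOMES = 11_000

watts_per_chip = PEAK_MW * 1_000_000 / CHIPS   # system draw averaged per chip
kw_per_home = PEAK_MW * 1_000 / HOMES          # household draw implied by the comparison
print(f"~{watts_per_chip:.0f} W per superchip (incl. cooling and network)")
print(f"~{kw_per_home:.2f} kW per home implied by the comparison")
```

Roughly 700 W per superchip and about 1.5 kW per home, both plausible averages, which suggests the published figures are internally consistent.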
Images | Nvidia | Jülich Supercomputing Centre In Xataka | Alibaba has just shown that OpenAI spends $78 million to do the same thing it does for $500,000

We thought Nvidia was the company that had benefited most from AI. Micron makes that look ridiculous

Micron is at all-time highs on the Nasdaq, and with good reason. The American manufacturer is profiting handsomely from the AI and data-center fever. Demand for memory chips is growing extraordinarily, but that has two faces: a good one for Micron, and a bad one for customers and consumers. Everyone loves Micron. Citigroup analysts raised Micron’s target price these days from $150 to $175. The reason: according to their data, the company will post financial results “much better than consensus” when they are presented on September 23. Micron is doing so well that it is even outgrowing Nvidia. Source: Bloomberg. The chips are on a tear. Just a week ago the shares were around $125; yesterday they closed at $150, and in pre-market trading the figure is $155. This year the stock has already gained 81%, beating Nvidia’s 33% growth, although it is also true that the company led by Jensen Huang grew especially in 2024 (approximately 170%). Micron’s “Compute and Networking” division is the one covering data centers. As can be seen, sales in that segment already account for more than half of the total in the last quarter. The bet on HBM memory is paying off. Micron has devoted many resources to boosting the manufacture of HBM memory, used precisely in the accelerators (GPUs) deployed in data centers. Independent analyses confirm the growing weight of both these memories and the AI segment in Micron’s business. Micron will raise prices. According to Citi analysts, AI inference workloads need more DRAM and NAND memory, and demand is growing spectacularly. The problem is that this demand will outstrip supply, and Micron will seize the occasion to do something logical (for it): raise prices. By up to 30%.
That is what TrendForce indicates, and also some media outlets in China, according to which Micron notified its distribution channel partners today that the prices of its storage products will rise between 20% and 30%. In fact, quotes for DDR4, DDR5, LPDDR4 and LPDDR5 memory, among others, have been suspended: “All prices agreed with customers will be canceled and quotations will be suspended. All products are expected to stop being quoted for a week.” That involves not only industrial and consumer memory; chips for the automotive industry will rise in price by as much as 70%. SanDisk and TSMC have already announced increases. Both TSMC and SanDisk announced price increases for memory chips in recent days. That will affect their big clients (Apple and Nvidia, among others) and, as noted at TechPowerUp, it is a clear confirmation that manufacturers want to protect their gross margins. At SanDisk, prices have risen 10% due to the “growing demand” from the AI market, data centers and mobile devices. For now, TrendForce indicates, that hike has met resistance from customers. In Xataka | Intel’s recent history is one of failure. Now it has found a niche from which to resurface: HBM memory

China is no longer content with moving away from Nvidia. Its next step is the heart of AI, with a system that breaks the mold

In 2017, with the paper “Attention Is All You Need”, Google changed the technical foundation of language generation: Transformers made it possible to process long sequences in parallel and scale models to sizes that were previously unfeasible. That scaling path has powered architectures such as GPT and BERT and has turned self-attention into the central piece of contemporary generative AI. But the approach came with growing memory and energy costs as the context gets longer, a limitation that has motivated research into alternatives. SpikingBrain-1.0 aims to break the mold. From “attention is all you need” to the brain: a new bet on breaking limits in AI. A team from the Institute of Automation of the Chinese Academy of Sciences has just presented SpikingBrain-1.0. We are talking about a family of spiking models aimed at reducing the data and computation needed for tasks with very long contexts. The researchers propose two approaches: SpikingBrain-7B, a linear architecture focused on efficiency, and SpikingBrain-76B, which combines linear attention with higher-capacity Mixture of Experts (MoE) mechanisms. The authors note that much of the development and testing was carried out on clusters of MetaX C550 GPUs, with libraries and operators designed specifically for that platform. That makes the project not just a promising advance at the software level but also a demonstration of homegrown hardware capability, an especially relevant point given China’s effort to reduce its dependence on Nvidia, a strategy we already saw reflected with DeepSeek 3.1. SpikingBrain-1.0 is directly inspired by how our brain works. Instead of having neurons that are always “firing” by crunching numbers, it uses spiking neurons: units that accumulate signals until they exceed a threshold and trigger a peak (spike). Between one spike and the next they do nothing, which saves operations and, in theory, energy.
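That accumulate-until-threshold behavior is the classic integrate-and-fire model from computational neuroscience. A minimal sketch in Python; the threshold and leak values here are illustrative, not SpikingBrain’s actual parameters:

```python
# A minimal leaky integrate-and-fire neuron, the textbook model behind
# "spiking" units like those SpikingBrain describes. Threshold and leak
# are hypothetical values chosen for illustration only.
def lif_neuron(inputs, threshold=1.0, leak=0.9):
    """Accumulate input until a threshold is crossed, then emit a spike (1)
    and reset; emit 0 (and do no downstream work) otherwise."""
    potential = 0.0
    spikes = []
    for x in inputs:
        potential = potential * leak + x   # integrate input, with decay
        if potential >= threshold:
            spikes.append(1)               # fire a spike
            potential = 0.0                # reset after firing
        else:
            spikes.append(0)               # silent: nothing to compute downstream
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9, 0.3]))  # → [0, 0, 1, 0, 0, 1]
```

A unit that emits 0 most of the time is exactly what makes the scheme cheap: downstream computation only happens on the comparatively rare spikes.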
The key is that it is not only how many spikes there are that matters, but when they occur: the exact timing and ordering of the spikes carry information, as in the brain. To make this design work with the current ecosystem, the team developed methods that convert traditional self-attention blocks into linear versions that are easier to integrate into its spiking system, and created a kind of “virtual time” that simulates temporal processes without hurting GPU throughput. In addition, the SpikingBrain-76B version includes Mixture of Experts (MoE), a system that “wakes up” only certain sub-models when they are needed, something we have also seen in GPT-4o and GPT-5. The authors suggest applications where context length is decisive: analysis of large legal files, complete medical records, DNA sequencing and massive experimental datasets in high-energy physics, among others. That fit is argued in the paper: if the architecture maintains its efficiency at contexts of millions of tokens, it would cut costs and open up possibilities in domains that today are limited by access to very expensive computing infrastructure. But validation in real environments outside the laboratory is still pending. The team has released the code of the 7-billion-parameter model on GitHub alongside a detailed technical report. It also offers a ChatGPT-like web interface to interact with the model, which according to the authors is deployed entirely on domestic hardware. Access, however, is limited to Chinese, which complicates its use outside that ecosystem. The proposal is ambitious, but its real scope will depend on the community reproducing the results and running comparisons in homogeneous environments that evaluate accuracy, latency and energy consumption under real conditions. Images | Xataka with Gemini 2.5 | ABODI VESAKARAN In Xataka | OpenAI believes it has discovered why AIs hallucinate: they don’t know how to say “I don’t know”

Nvidia, TSMC and SK Hynix are the most powerful chip companies on the planet. None of them can afford to let the others fall

Nvidia dominates the global market for artificial intelligence (AI) chips with a share that over the last three years has oscillated between 80% and 94%, according to FourWeekMBA. Its leadership rests on very competitive hardware and on a software ecosystem in which CUDA (Compute Unified Device Architecture) plays an essential role. This technology bundles the compiler and development tools programmers use to build their software for Nvidia GPUs. However, the company led by Jensen Huang has a fundamental partner: TSMC. Nvidia designs the AI chips, and this Taiwanese semiconductor manufacturer, the largest on the planet with a global share close to 60%, produces them. Its iron grip on the market is the result of its cutting-edge technology and titanic production capacity. TSMC has many important clients, such as AMD, Qualcomm, MediaTek and Broadcom, among many others, but thanks to AI, Nvidia has established itself as its second-best customer, only behind Apple. TSMC is presumably about to start manufacturing 2 nm GPUs for Nvidia, but that is not the only thing this chipmaker is going to do for one of its best customers. The Taiwanese company has decided to launch a five-year plan to expand its capacity to manufacture integrated circuits using its advanced CoWoS (Chip-on-Wafer-on-Substrate) packaging technology. According to Beth Kindig, of the I/O Fund, this technology will account for between 50% and 60% of the market in 2025, compared with the 15% it held in 2024. The synergy between these companies is indisputable. The high demand for AI GPUs built on Nvidia’s Blackwell microarchitecture is largely responsible for this plan. The company led by Jensen Huang will be able to respond better to its customers’ needs and will see its competitiveness grow in a phase in which DeepSeek and other Chinese companies represent a challenge.
In March 2024 TSMC officially announced that it was building two CoWoS packaging plants in the town of Chiayi, in southern Taiwan. But that is not all. It has also considered the option of putting another plant specialized in this advanced packaging technology in Japan, presumably on the island of Kyushu, where the company is currently building two cutting-edge semiconductor production plants. In any case, there is something else: the Chiayi plants will be equipped to work not only with CoWoS packaging but also with the advanced InFO and SoIC (System on Integrated Chips) technologies. It is clear that TSMC wants to cover its back and look to the future to keep its production capacity from being throttled by a bottleneck. An interesting note: CoWoS packaging is currently used with AMD’s Instinct MI250 chips and with Nvidia’s A100, H100, H200, B100 and B200 GPUs, as well as their derivatives. The revision used in the last two chips, the B100 and B200, is known as CoWoS-L. Before this year is out, TSMC will be able to process no less than 60,000 wafers per month with its advanced packaging technology. The synergy of Nvidia and TSMC is indisputable, but this recipe requires a third ingredient: SK Hynix. This South Korean memory chipmaker leads the market for the HBM (High Bandwidth Memory) chips that work side by side with AI GPUs, and with striking authority: its market share exceeds 70%, with the remaining 30% split between Samsung and Micron Technology. Behind them come the Chinese manufacturers Yangtze Memory Technologies Co. (YMTC) and CXMT (ChangXin Memory Technologies). At the end of 2024 SK Hynix took advantage of an innovation forum organized by TSMC to show off its mastery of HBM memory manufacturing.
According to SK Hynix itself, its MR-MUF process, broadly speaking a technology that enables faster stacking of DRAM dies than the TC-NCF process other companies use, has allowed it to achieve an efficiency 8.8 times higher than that of Samsung and Micron. Put simply, it manufactures its HBM chips much faster than its main competitors. As we can intuit, the speed at which a semiconductor company can produce its integrated circuits deeply conditions its competitiveness. Greater efficiency clearly lets it supply its customers with stronger guarantees, especially in a rising market like that of HBM memory. What’s more, SK Hynix is manufacturing 12-layer HBM3E memory at scale while Samsung and Micron are having production problems. In any case, both Samsung and SK Hynix are already working on developing HBM4 memory with the aim of catapulting their competitiveness. This is precisely where Nvidia comes in. SK Hynix announced in October 2024 that it planned to deliver the first HBM4 memory chips to its clients during the second half of 2025. However, Jensen Huang asked it to bring the delivery forward. Chey Tae-won, the chairman of SK Group, confirmed it, so the information is entirely reliable. Why does Nvidia need the HBM4 chips so urgently? Simply because it needs to pair its most capable AI chips with the fastest and most energy-efficient memory available. And in this field SK Hynix currently holds all the cards. Image | TSMC In Xataka | South Korea fears US reprisals. To avoid them, its old lithography equipment is gathering dust in a warehouse

Nvidia’s global leadership in AI chips is brutal. In gaming GPUs it has simply obliterated the competition

Nvidia dominates the global market for artificial intelligence (AI) chips with a share that over the last three years has oscillated between 80% and 94%, according to FourWeekMBA. Its leadership rests on very competitive hardware and on a software ecosystem in which CUDA (Compute Unified Device Architecture) plays an essential role. This technology bundles the compiler and development tools programmers use to build their software for Nvidia GPUs. Most of the artificial intelligence projects currently in development are implemented on CUDA, and replacing it with another option in projects already underway is a problem. Huawei, which aspires to a significant slice of this market in China, has CANN (Compute Architecture for Neural Networks), its alternative to CUDA. And Moore Threads and Cambricon Technologies have MUSA and NeuWare, respectively. Even so, Nvidia’s competitors will find it very hard to break CUDA’s leadership. Nvidia has shipped 94% of the gaming GPUs on the market. During the second quarter of 2025, 11.6 million graphics cards for PCs and 21.7 million desktop processors were shipped worldwide, according to the US consultancy Jon Peddie Research. By themselves these figures tell us little, but they acquire the relevance they deserve once we consider that graphics card shipments grew by 27%, and CPU shipments by 21.6%, compared with the first quarter of 2025. It is important not to overlook that these figures count units shipped, not units sold. There is, however, a direct correlation between the two, so shipments allow us to form a very precise idea of how the market is behaving.
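In unit terms, the shares Jon Peddie Research reports are easy to translate. A quick sketch, assuming the 94/6 split applies to the 11.6 million cards shipped in the quarter:

```python
# What Q2 2025 market shares imply in shipped units, assuming the
# 94% Nvidia / 6% AMD split applies to the 11.6M graphics cards
# reported by Jon Peddie Research. Illustrative arithmetic only.
CARDS_SHIPPED_M = 11.6          # millions of graphics cards in the quarter
SHARES = {"Nvidia": 0.94, "AMD": 0.06}

units_m = {vendor: round(CARDS_SHIPPED_M * share, 2) for vendor, share in SHARES.items()}
print(units_m)  # millions of units per vendor
```

Roughly 10.9 million cards for Nvidia against about 0.7 million for AMD in a single quarter, which is what “obliterated the competition” looks like in units.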
Still, there is one figure even more striking than everything we have covered so far in this article: Nvidia shipped no less than 94% of the gaming GPUs on the market during the second quarter of 2025, again according to Jon Peddie Research. AMD has had to settle for 6% of shipped units, and Intel does not even appear in this consultancy’s report because its presence is anecdotal. That is how things stand in the PC graphics hardware market. One last note to conclude: the rebound in shipments of graphics cards and PC processors during the second quarter of 2025 versus the first responds, in all likelihood, to the need for stores and users to stock up before the tariffs approved by the US government came into force. Image | Xataka More information | Jon Peddie Research In Xataka | Nvidia has the chip it needs to survive in China ready. What isn’t ready is the US government letting it sell it

This company is China’s great hope for finally dispensing with Nvidia’s chips

In China there are dozens of companies dedicated to designing GPUs for artificial intelligence (AI) applications. StepFun, which belongs to Tencent Holdings; Infinigence AI; SiliconFlow, from Huawei; MetaX; Biren Technology; Focus Me; Iluvatar CoreX and Moore Threads are some of the most important. Currently, however, one shines brighter than the others. In fact, as we anticipated in the headline of this article, this company is China's best asset for dispensing with Nvidia's chips.

Although it is not as well known as Huawei or Moore Threads, Cambricon Technologies is one of the companies specialized in GPU design for AI with the greatest growth potential. In fact, it has received approval from the Shanghai Stock Exchange (China) to raise 560 million dollars. It will allocate the money to the design of four chips for the training and inference of AI models, and also to the development of an alternative to Nvidia's CUDA.

Everything seems to be going well for this company: over the last twelve months the value of its shares has tripled. The strategic role of AI for China in its technological and trade war with the US underpins the Chinese companies dedicated to AI hardware design and to the development of large language models. However, there is more than promises driving the business of Cambricon Technologies and of the other Chinese companies that design integrated circuits for AI: the Chinese government has decided to require state-owned data centers across the country to use at least 50% Chinese integrated circuits in their servers.

Cambricon Technologies is not a startup like the others. China needs talent to compete with the US on equal terms, and it knows where to look for it: its own population. In fact, the administration has encouraged the creation of elite educational centers that welcome the best students in the country with open arms. The Chen brothers were two of them.
Today they are the founders and top executives of Cambricon Technologies. The first, Chen Tianshi, serves as chairman and CEO of this company specialized in chip design for AI applications. The second, Chen Yunji, is an expert in the development of processors for neural networks who, as far as we know, acts as an advisor and head of technology at Cambricon. Both trained in an elite program for young talents at the Chinese Academy of Sciences, and both are currently researchers and professors at that institution. Their best asset is their complementarity: Tianshi is an expert in chip design, and Yunji in AI.

Chen Tianshi and Chen Yunji obtained their doctorates in computer science at age 24. Together they created a project at the Chinese Academy of Sciences that pursued a processor specialized in deep learning. Their plan went well, and that chip allowed them to found their company. Their track record backs them up, and there is no doubt that their effort helped them reach the position they now hold. In fact, both obtained their doctorates in computer science at age 24.

However, Cambricon is not a traditional startup. The growth we described a few lines above, and the expectations it has raised, have been driven by the support of the Chinese government, which sees in this company a chance to achieve the technological self-sufficiency it needs. Over the last three years Huawei has established itself as one of the main Chinese GPU designers for AI, but Cambricon has something this giant currently lacks: it combines very ambitious hardware with a constantly improving software platform. Huawei's Ascend family of chips is very competitive, and Huawei also has CANN (Compute Architecture for Neural Networks), its alternative to CUDA, but Cambricon is demonstrating that it can adapt its Neuware software very quickly to the needs of its customers.
And in a market where CUDA rules with an iron fist, that is a very important asset. Currently the flagship products with which it competes against Nvidia and Huawei in the Chinese market belong to the MLU (Machine Learning Unit) series. In fact, the Chinese semiconductor industry expects the Siyuan 690 GPU to deliver performance comparable to Nvidia's H100 chip. In addition, Cambricon guarantees that its products are compatible with the models of China's leading AI companies, such as DeepSeek, Alibaba's Qwen or Tencent's Hunyuan, among others, which has allowed it to win the confidence of Chinese industry. If we add that, according to the Financial Times, developers find Neuware easier to use than CANN, it is reasonable to anticipate that over the coming months Cambricon will monopolize the attention of the technology industry.

Image | Cambricon Technologies

Nvidia has become the most important company in the world. Its problem is that it has all its eggs in one basket

At Nvidia everything is running smoothly, but not even that is enough for Wall Street. The latest quarterly results report has once again demonstrated the company's exceptional strength, but beware: the most important company in the world, by market capitalization at least, has an Achilles heel. A dangerous concentration of customers.

The official document accompanying the financial results refers to a "concentration risk" among Nvidia's large customers. The situation is genuinely worrying, because six customers account for 85% of all the company's income:

10,750 million dollars – Customer A (23% of total income)
7,480 million dollars – Customer B (16%)
6,540 million dollars – Customer C (14%)
5,140 million dollars – Customer D (11%)
5,140 million dollars – Customer E (11%)
4,670 million dollars – Customer F (10%)

The problem is growing, not shrinking. If we look only at the two most important customers, A is responsible for 23% of Nvidia's revenue and B for 16%: 39% of income therefore comes from just two clients. A year ago the two largest Nvidia customers were responsible for 14% and 11% of income, 25% in total. These data raise an inevitable question: who is who in that customer list. And the answer is not simple.

Direct customers... Nvidia draws a distinction among the clients it refers to in the document, dividing them into two large groups. The first is direct customers, which are not the end users of its chips but companies that buy the chips and mount them in complete systems or on boards that they then sell to data centers, cloud infrastructure providers or end customers. Examples, as they point out at CNBC, would be Foxconn, Quanta or Dell.

... and indirect customers. This is where the companies we are all thinking of come in, which use these chips, bought from the direct customers, in their gigantic data centers.
Microsoft, OpenAI, Meta, Google, Tesla/xAI, and even Oracle, are clear candidates, but again, it is impossible to know for sure who is on that list of big buyers.

But the two most important are direct. What Nvidia does indicate is that customers A and B are direct customers, so in theory they are none of those big tech companies. But Nvidia's definitions are somewhat fuzzy, and the company states that some direct customers buy chips to build systems for their own use, so any of the Big Tech could fall under that definition. To complicate things further, Nvidia said that two of its indirect customers were each responsible for 10% of its total income, above all through purchases of systems from customers A and B.

OpenAI among the favorites. Nvidia spoke of "an AI research and development company" that contributed a "significant" amount of income through both direct and indirect customers. There are more candidates here, but one of the strongest would be OpenAI, especially now that it is working on the Stargate project.

But the situation is dangerous. Be that as it may, depending on so few customers is delicate and creates a dangerous chain of dependency. Nvidia depends on intermediaries that in turn depend on a handful of tech giants. The company's fate is in the hands of two buyers who represent almost 40% of its business, but the risk is not only Nvidia's: it extends to the entire technological ecosystem that depends on its chips.

It is not only companies: there are countries buying GPUs. Another curious piece of data in this report is how some foreign governments are also buying chips massively. In fact, the company expects to take in 20,000 million dollars from these "sovereign" projects with countries trying to create their own AI models and infrastructure.

Image | Sharon Waldron, edited with Google Gemini
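The concentration figures above are internally consistent, and a few lines of arithmetic make that visible: the six shares sum to 85%, the top two to 39%, and each customer's dollar figure divided by its reported share implies roughly the same quarterly revenue total. This is a quick check using only the numbers as quoted here:

```python
# (revenue in millions of dollars, share of total income) per anonymous customer
customers = {
    "A": (10_750, 0.23),
    "B": (7_480, 0.16),
    "C": (6_540, 0.14),
    "D": (5_140, 0.11),
    "E": (5_140, 0.11),
    "F": (4_670, 0.10),
}

# The six customers together account for 85% of revenue
total_share = sum(share for _, share in customers.values())
print(f"Combined share: {total_share:.0%}")        # 85%

# Customers A and B alone: 39% (vs. 25% a year earlier)
top_two = customers["A"][1] + customers["B"][1]
print(f"Top two customers: {top_two:.0%}")         # 39%

# Each (revenue / share) pair implies roughly the same total quarterly revenue
implied_totals = [rev / share for rev, share in customers.values()]
print(f"Implied total revenue: ~{sum(implied_totals) / 6:,.0f} million dollars")
```

All six implied totals land within a few tens of millions of each other, around 46,700 million dollars, which matches the scale of Nvidia's reported quarterly revenue.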
