That Oracle has to speak out on the NVIDIA–OpenAI soap opera is a bad sign. That it won't turn a profit on this until 2029 is another

Oracle said in a tweet that the agreement between NVIDIA and OpenAI has "zero impact" on its financial relationship with the company behind ChatGPT. This is more complicated than it seems, because the AI business could end up collapsing if a large company like NVIDIA or Oracle shows even a hint of doubt about OpenAI. The latest statements by Jensen Huang, CEO of NVIDIA, have made the market nervous, although Oracle's own path is not very encouraging either.

Why is it relevant. Oracle just announced that it will raise between $45 billion and $50 billion this year through debt and equity issuance to build cloud infrastructure for its large AI clients. Among them, OpenAI stands out with a $300 billion, five-year contract that starts in 2028. The problem is that OpenAI is not profitable right now, and Oracle needs OpenAI to raise capital so that it can pay it. It is a circular financing loop in which everyone depends on everyone else continuing to sign checks.

The numbers don't add up yet. The contract with OpenAI involves about $60 billion annually starting in 2028. To fulfill it, Oracle must buy approximately 400,000 NVIDIA GB200 chips, at an estimated cost of $40 billion just for its flagship data center in Abilene, Texas. Meanwhile, OpenAI's total revenue in 2025 was around $13 billion, according to Bloomberg. Oracle is betting its bottom line that a company that currently burns more cash than it generates can pay bills equal to roughly five times its current annual revenue.

The alarm signals. In January, investors accused Oracle of hiding the need for more debt to finance its AI infrastructure, according to Reuters. Oracle's debt-to-equity ratio stands at 6x, and in December its credit default swaps reached levels not seen since the 2008 financial crisis, according to Bloomberg.
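The mismatch between the contract and OpenAI's revenue can be sanity-checked with back-of-the-envelope arithmetic. A minimal sketch using only the figures cited in the article (the "five times" claim is a rounding of the exact ratio):

```python
# Back-of-the-envelope check of the figures cited in the article.
oracle_contract_total = 300e9   # $300B contract over five years
contract_years = 5
openai_revenue_2025 = 13e9      # ~$13B OpenAI revenue in 2025, per Bloomberg

annual_obligation = oracle_contract_total / contract_years
multiple = annual_obligation / openai_revenue_2025

print(f"Annual obligation: ${annual_obligation / 1e9:.0f}B")    # $60B per year
print(f"Multiple of 2025 revenue: {multiple:.1f}x")             # ~4.6x, rounded up to "five times"
```

The exact ratio is about 4.6x, which the article rounds to five; either way, the obligation dwarfs OpenAI's current top line.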
On top of all this, Oracle's stock has fallen 50% from its September peak, reached precisely when it announced the agreement with OpenAI, erasing some $460 billion in market capitalization.

In the red until 2029. Developing data centers for AI has pushed Oracle's free cash flow into negative territory, where it is expected to remain until 2030, according to data compiled by Bloomberg. Jefferies estimates that the company will need to raise more funds in 2027 and subsequent years, since cash flow will not turn positive again until 2029. Oracle plans to raise $50 billion: half through equity, with convertible preferred securities and a share sale program of up to $20 billion, and the other half through a single bond issue in early 2026.

Between the lines. What really worries the market is the structure of mutual dependence. NVIDIA funds OpenAI. OpenAI pays Oracle. Oracle buys chips from NVIDIA. Everyone's revenue growth depends on everyone else continuing to write checks. When Jensen Huang, CEO of NVIDIA, told journalists that the $100 billion agreement with OpenAI "was never a commitment" and that they would invest "step by step," Oracle had to publish that tweet to calm the waters. And that tweet is precisely the kind of communication that worries investors.

Cover image | IEEE Awards, Hartmann Studios, Wikimedia Commons

NVIDIA and OpenAI’s relationship is disintegrating

We have to talk. It's not you, it's me. Our love is broken. That is what seems to be happening between NVIDIA and OpenAI, who just four months ago were living an idyllic moment: the former announced a mammoth $100 billion investment in the latter, and everything suggested we might be witnessing the birth of a new great technological empire. It was the most ambitious marriage in the history of technology, but now that marriage is failing.

A decade of love. It was August 2016, and everyone knew about NVIDIA but almost no one knew about OpenAI. Jensen Huang, CEO of NVIDIA, saw clearly that the company had potential, so he gave Elon Musk, then a co-founder and board member of OpenAI, a DGX-1 server, its first "desktop supercomputer" for AI. OpenAI went on to use increasingly advanced NVIDIA GPUs to develop its work, and with the explosion of ChatGPT in 2022 it became one of NVIDIA's largest GPU customers, while NVIDIA in turn was buying shares of OpenAI. The quid pro quo had begun.

From "what I said"… In September 2025 NVIDIA announced a "strategic investment" of up to $100 billion in OpenAI. It was yet another gigantic case of circular financing that apparently made these two companies stronger and everyone else weaker. In recent days, however, there has been talk that the announcement is unraveling, and according to The Wall Street Journal the agreement is frozen. The paper indicates that, according to Huang, the agreement was not binding, and that he privately criticized OpenAI for a lack of discipline in its business strategy.

…to "I take it back." At a meeting with journalists in Taipei on Saturday, Huang indicated that NVIDIA will "absolutely be involved" in the new funding round that OpenAI is carrying out.
In fact, he assured that "we will invest a large amount of money, probably the largest investment we have ever made," but when asked whether that investment would exceed $100 billion, he said "No, no, nothing like that." Furthermore, as shown in the video in the embedded tweet, he clarified that "we never said we were going to invest $100 billion in a single round" and stressed that "there was never a commitment." "We were invited to invest up to $100 billion and we were honored," he explained, but added that "we will consider each round of financing separately."

Narrative clash. Huang's statements prompted Sam Altman to quickly downplay the matter, saying that "we expect to be a gigantic customer (of NVIDIA) for a very long time" and adding that "I don't know where all this madness is coming from." However, the statements from both sides suggest differences of opinion and a latent tension around that hypothetical commitment, one that perhaps was not communicated or clarified adequately back in September.

OpenAI has its own complaints about NVIDIA. Reuters reports that OpenAI is "dissatisfied" with some of NVIDIA's AI chips because, while they are great for model training tasks (preparing models before we use them), they are not so great for inference. OpenAI is said to be looking for alternatives for inference chips and is in talks with Cerebras and Groq to supply advanced inference chips. Here's a bonus chapter: NVIDIA reached an agreement with Groq to license ("pseudo-acquire") the company's technology for $20 billion, which has blocked OpenAI's talks with Groq.

And he looks for other partners. Sam Altman doesn't hesitate when it comes to seeking alternatives in order to prosper. He did it when the relationship with Microsoft broke down, courting others like SoftBank, Oracle or NVIDIA itself.
But in reality he plays several sides at once, because he has become a shareholder in AMD, one of NVIDIA's biggest rivals. And there is more. A lot more.

Polyamory. Amazon is in talks with Sam Altman to close an investment of up to $50 billion in OpenAI. Altman is also negotiating with SoftBank an additional investment of $30 billion from the Japanese company, which had already committed to a $40 billion investment a year ago. The amounts are dizzying, but OpenAI handles them as if nothing were happening.

Dependencies and reverse lock-in. Typically, companies fear being locked into dependence on a vendor like NVIDIA. Here NVIDIA seems to be suffering just the opposite: being trapped by a client (OpenAI). If NVIDIA invests $100 billion, it becomes too dependent on OpenAI's success. If Altman's company fails or changes course, the hole in NVIDIA's balance sheet would be catastrophic. It is "mutually assured destruction."

Image | Hillel Steinberg | Village Global

Get ready because NVIDIA “needs a lot of wafers”

The foreign relations of the United States are no longer a solely governmental matter. Figures like Tim Cook have been acting as ambassadors in recent years, receiving treatment from political institutions worthy of a high-ranking politician, and now the baton in both the global technological and economic conversation has been taken up by another Big Tech CEO. Jensen Huang is the boss of NVIDIA, and he recently visited Taiwan to underline something key: NVIDIA would not be possible without Taiwan. And he took the opportunity to put pressure on the factory that moves the technological world: TSMC.

The billion-dollar dinner. Huang has been touring Taiwan, his home island. It also happens to be the epicenter of the global technology industry, home to companies as powerful as ASUS and MediaTek, as well as Foxconn and the maker of most of the advanced chips in our devices: TSMC. To close out the trip, Huang met two dozen people for dinner at a local restaurant in an event dubbed by the press "the billion-dollar banquet." More than a dinner with friends, it was an institutional event, because NVIDIA may be calling the shots in this era of AI, but TSMC is the one holding the cards.

Step it up. In that setting, in an improvised press conference, Huang made it clear that 2026 will be a crucial year, but he also delivered an interesting headline: "TSMC needs to work very hard this year because I need a lot of wafers," as local media reported. It may be a joking comment, but it is one of those jokes that are not really jokes. The CEO of NVIDIA added that TSMC is doing "an incredible job" and predicted that it will increase its capacity by more than 100% over the next decade.

No pressure, then. TSMC is key. Just a few weeks ago, it announced that its spending would increase by almost 40% to reach $56 billion in 2026, with additional increases planned for both 2028 and 2029.
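That "almost 40%" growth figure lets us back out TSMC's implied prior-year capex. A rough sketch, assuming the 40% increase is taken at face value (the baseline is my own arithmetic, not a figure from the article):

```python
# Rough back-calculation of TSMC's implied prior-year capex
# from the figures cited above.
capex_2026 = 56e9   # $56B announced for 2026
growth = 0.40       # "almost 40%" year-over-year increase

implied_prior_year = capex_2026 / (1 + growth)
print(f"Implied prior-year capex: ~${implied_prior_year / 1e9:.0f}B")  # ~$40B
```

In other words, the announcement implies roughly $16 billion of additional spending in a single year.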
That makes perfect sense considering that TSMC manufactures not only almost all of the world's advanced chips but also the heart of the NVIDIA graphics cards that have become the standard for artificial intelligence. The Taiwanese company is not only manufacturing at home: it is taking steps to expand into Europe (with its much-touted factory in Germany), and it already has a plant in the United States that will expand in the short term. NVIDIA itself will be one of the first customers of the advanced chips TSMC produces on US soil.

If the chain fails, AI is in trouble. The problem is that Huang doesn't just need wafers: he needs RAM, and we are in one of the deepest component crises in history. The unbridled spending on components to power data centers for AI has left consumers without the chance to buy PC components at a reasonable price. First it was RAM and then SSDs, because companies like TSMC, Micron or Samsung cannot keep up with production. Some, like the aforementioned Micron, have left the consumer component market because they need to run all their plants for a single purpose: powering those data centers. And the chain cannot fail, something Huang himself has stated, pointing out that they will need a lot of memory this year (graphics cards also carry memory inside) and that "the entire supply chain will be a challenge in 2026 because demand will be much greater."

In short: a challenge for manufacturers, a headache for users. For Huang, a blessing, since his company leads the way in an artificial intelligence that, according to him, "has become something really useful."

Images | TSMC, NVIDIA

The arrival of NVIDIA's processors is imminent, and it brings eight laptops with it

Where there's smoke, there's fire. Rumors of NVIDIA launching its own processors for home computers have been circulating for at least a couple of years. Well, the arrival of Jensen Huang's company in this segment is imminent, and it represents a head-on challenge to the hegemony of Intel, AMD and the x86 architecture. What's more, it points to a paradigm shift in how we will understand Windows personal computers in the coming years. Bottom line: think Apple Silicon.

The context. To date, Intel and AMD have divided up the Windows laptop pie, and the ARM architecture was reserved either for more or less affordable, basic computers (Chromebooks with MediaTek chips) or for an expensive MacBook. True, there is already a powerful Windows laptop with a Qualcomm Snapdragon under the hood, but the arrival of NVIDIA chips in this first quarter of 2026 aims to be the definitive push. These ambitious, powerful machines will not pair an NVIDIA GPU with an Intel CPU, as we have seen for years; instead, they will carry an NVIDIA SoC (actually two models: the N1 and N1X). Simply put, NVIDIA takes care of both essential pieces of hardware.

Why it's important. NVIDIA wants to do with Windows what Apple has achieved with its M chips: an ecosystem where the processor and graphics are integrated and work together seamlessly, which shows in things like battery consumption, efficiency and raw performance.

The blessed convergence. For years, if you wanted a powerful computer for gaming or work, you had to choose Intel or AMD and the x86 architecture, but NVIDIA is charging in like a bull in a china shop with the ARM architecture and its advantages: more efficiency, less heat and longer battery life. Until now, a reliable ARM machine for gaming was a pipe dream, but the N1X chip, with the same Blackwell architecture as the RTX 50 series, wants to change that. And it could do so in slimmer machines, at that.
On top of that, these chips use unified memory (up to 128 GB of LPDDR5X), which means fewer bottlenecks in demanding tasks such as gaming, local AI or video editing.

What has leaked. A Lenovo leak has revealed that the company has manufactured six laptops with the N1 and N1X processors, including a 15-inch gaming machine. An X/Twitter user has published the list of machines. The user's profile is completely anonymous, which doesn't inspire much confidence, but there is more: an update to the Legion control software already shows the existence of a "Legion 7 15N1X11" laptop, where "N1X" is precisely the NVIDIA SoC. In addition, The Verge has discovered already-indexed, protected Lenovo content that refers to products with these processors. And it's not just Lenovo: Dell has also let slip a premium device with the NVIDIA N1X on its website, as another X/Twitter user spotted. Just a couple of days ago, Digitimes put a date on it: this spring, with more devices arriving in the summer. After suffering a delay, it seems they will finally become a reality, and it won't stop there: the company already has the N2 and N2X on its roadmap for the end of 2027. Product descriptions with NVIDIA SoCs already appear on the Lenovo website.

The processors. There is little information about these components beyond a Geekbench leak that must be taken with a grain of salt. We know that the N1X is the more powerful chip, aimed at beefier models, and rumors suggest it has a 20-core CPU and an integrated GPU with 6,144 CUDA cores (Blackwell architecture). NVIDIA's CEO has already confirmed that the N1 and the GB10 superchip are practically the same; the N1 is simply a more modest chip focused on thermal efficiency and battery life, aimed at ultrabooks and the mid-range.

The first laptops with NVIDIA chips.
The leaked devices that will debut with SoCs from the company led by Huang number eight: a Lenovo Legion 7 (15N1X11) gaming laptop with the N1X chip, the Lenovo Yoga 9 and Yoga Pro 7 convertibles with versions for both the N1X and N1, the IdeaPad Slim 5 in 14- and 16-inch versions with the N1 chip, and a Dell "Premium 16" (probably XPS or Alienware) with an OLED screen and the N1X chip.

NVIDIA is not new to this. Lenovo is the largest PC manufacturer in the world (according to Statista), and the fact that it is launching several models across its most important families means it has strong reasons to trust that the performance of NVIDIA's chips is up to par. NVIDIA's ARM PC chips have been a long time coming, but that does not make it a newbie in the sector: remember that the Nintendo Switch has a Tegra SoC, and that this line has previously powered tablets, the Microsoft Surface and the Shield TV.

It is the beginning of a new cold war. If confirmed, Microsoft's wish would be fulfilled: Windows on ARM as a real alternative to Apple's MacBook. The first quarter is not only a launch date; it could become a before-and-after in the battle between NVIDIA, Apple, AMD and Qualcomm for control of the computers of the future.

Cover | Hillel Steinberg

The US offered NVIDIA chips to China. China has responded with a “no, thank you”, according to the Financial Times

China has turned technological development into state policy. The country is shaking up its economy through robot development (some robots are already working in stores or in disaster zones), artificial intelligence and, above all, chips. Giants like Huawei and companies like SMIC are developing chips with one goal in mind: eliminating dependence on the United States. However, some of these companies need access to powerful, reliable chips right now, and NVIDIA had presented itself as the best option. It seems it was all a mirage.

Full speed ahead. The current technology war between the United States and China means that Western companies cannot do deals with Chinese ones. This includes the sale of advanced chipmaking machines, but also means that NVIDIA, for example, cannot sell its advanced chips there, not even previous-generation ones. A few weeks ago, however, the United States relaxed its policies, opening the door for NVIDIA to once again sell the famous H200 to certain Chinese customers. The US would take a 25% cut of each sale, and it looked like a win-win: Chinese customers got access to renowned chips, and NVIDIA got a slice of the Chinese pie (a $50 billion pie), at least until local companies develop their alternatives. Last week we noted that NVIDIA had increased production in anticipation of two million orders. But there is a problem: a sudden stop.

We've run into customs. Until then China had not commented, and the person most interested in the operation, Jensen Huang, CEO of NVIDIA, remarked that if orders were arriving, it was because someone had authorized them. That was taken as silent confirmation from China, but now there is news. Although the country has still made no official statement, the Financial Times reports that NVIDIA was surprised to find that customs had stopped the orders.
According to sources consulted by the paper, customs officials in China recently summoned logistics companies in Shenzhen, one of the country's hubs of technological innovation, to warn them of something: they could not submit shipping requests for H200 chips.

National chips, please. That pressure has led the company to pause production. Right now there is nothing but uncertainty, after a chain of events showing that NVIDIA was desperate to sell. After pressuring both governments, Huang got the US to approve the sale in China, but China said nothing, something the American company took as tacit approval. Chinese policy in recent months has been very clear: favor and promote local industry with one goal, 'Delete America'. China seeks technological sovereignty through giants like the aforementioned Huawei, but also through others like Moore Threads, Biren, MetaX or Enflame.

Black market. However, the fact that orders for NVIDIA chips cannot be placed does not mean NVIDIA chips have stopped flowing: as Reuters pointed out a few months ago, the veto on the sale of sophisticated chips has fueled a black market for American chips, especially NVIDIA's B200 and B300, which are more powerful than the H200 the US administration authorized. There is talk of a market worth more than $1 billion, and although NVIDIA hoped to re-enter the country through official channels, it seems the government will keep encouraging its technology companies to bet on 'Made in China' solutions.

Images | Chinese Communist Party, NVIDIA + Photoshop

Chinese startups have been relying on NVIDIA chips to train their models for years. That is already changing

The name of the Chinese startup Zhipu AI (Z.ai) may not ring a bell, but perhaps GLM, its AI model, does: its latest version, GLM-4.7, already competes with Claude Sonnet 4.5 and GPT-5.1. The real surprise from this "Chinese AI tiger" is the launch of GLM-Image, and not so much for what it does as for how it was built.

What has happened. GLM-Image is a multimodal generative AI model focused on image generation. The idea, of course, is to compete with options like Google's Nano Banana. That's interesting, but even more striking is the fact that the model was not trained on conventional chips.

Trained on Chinese chips. According to Z.ai, this model is the first developed in China to be fully trained on "local" chips. Specifically, it was trained on Huawei's Ascend chips, using Huawei Ascend Atlas 800T A2 servers and a framework called MindSpore. In other words, it did not use NVIDIA's AI chips, which are usually the default choice for AI model developers at Chinese startups.

Turning point? This milestone demonstrates the real feasibility of training high-performance generative AI models on a platform developed entirely in China. That is no small thing: it is validation that innovation in this area can continue despite the restrictions imposed by the US. In fact, Zhipu AI (placed on the US blacklist last year) has intensified its collaboration with other local manufacturers, such as the promising firm Cambricon, which has risen from the ashes thanks to tariffs.

Threat to NVIDIA. The news comes at a singular moment, because NVIDIA has not stopped pressuring the US government to once again let it sell its advanced AI chips to Chinese companies. It has obtained that permission (which won't come free), but now the party that may not be interested is China, which has said nothing at all.
If chips from companies like Huawei prove a valid alternative for training quality AI models, that could change many things in this field.

Zhipu is off like a shot. The Chinese startup has also just gone public, and since then its shares have shot up more than 80%. Investors no longer see the company as a rival to Google or OpenAI, but as a banner: proof that it is possible to compete without depending on the US and its companies.

Huawei, the great beneficiary. If the trend continues, Huawei could become the Chinese NVIDIA, and the company is preparing to increase production of its AI chips. It is not alone: Cambricon plans to triple its production by 2026, which seems to make clear that Chinese industry is moving quickly to neutralize the impact of US vetoes.

Challenges… Despite everything, Zhipu has already warned that the price war in the AI sector will go international. If Chinese companies end up controlling the entire chain (or rather, their own chain), they could offer AI services at much lower costs than their Western competitors, who must pay NVIDIA's margins and Big Tech's cloud infrastructure bills.

…and unknowns. This technical achievement raises other questions. One of the most important is how powerful and capable Huawei's chips are compared to NVIDIA's in these workloads: is training much slower? Is it more expensive in time and resources? The efficiency of the MindSpore framework compared to PyTorch or TensorFlow is another key piece of these developments.

Nvidia is the ball in the AI ​​game. The US wants to share it with China, but it is not clear that China wants to play

CES in Las Vegas is technology's great showcase, and if it has had one protagonist (apart from the Chinese humanoid robots), it has been Jensen Huang. Nvidia's CEO has become a key figure in the artificial intelligence landscape because its chips are shaping data centers, and the H200 is the big name: the favorite for building data centers, and now a weapon in the trade and technology war, with the United States having vetoed the sale of the chip to China. But the situation seems to have relaxed, and some already suggest that Nvidia will soon regain access to a critical market.

In short. We already mentioned it in December: Nvidia planned to increase production of the H200 chip for 2026. That was a response not only to this year's rise of artificial intelligence (which is going to cause consumers plenty of problems), but to something much more important for the company: the reopening of the Chinese market. It all followed the announcement that the United States would allow exports of the H200, specifically, to certain Chinese customers. Those customers had to meet a series of conditions, such as being validated by the Department of Commerce, and each sale carries a 25% levy. It's outrageous, but while it was still being debated whether China would even want to buy H200s for its data centers (the country is developing its own solutions), Reuters pointed to one figure: two million orders.

Two million H200s. After the door opened, it was reported that Chinese giants such as Alibaba (e-commerce, cloud services and the Qwen model) and ByteDance (TikTok, Douyin and AI chatbots) were asking the Chinese government to let them buy Nvidia chips to boost their businesses. More recently, Reuters has pointed to a specific figure: two million H200 chips, with ByteDance and Alibaba each asking for 200,000.
That is the order the Chinese companies have reportedly already placed, pending the green light to formalize the purchase.

Strict payment plan. The H200 is not the most cutting-edge chip Nvidia has to offer, but it is one of the most used in data centers and the one it is allowed to export to China; more powerful ones remain restricted for national security reasons. And although nothing is official yet, Nvidia has set certain purchase conditions that essentially transfer the financial risk to customers: if imports are approved, they will have to pay in full up front. Deposits were previously allowed for some companies, but the lack of regulatory clarity, market instability and an H200 stock that may fall short if the market reopens have prompted these measures.

$50 billion. With this operation, Nvidia must be rubbing its hands. In the middle of last year, Huang himself pointed out that the Chinese AI market was headed toward $50 billion, stating that "it would be a tremendous loss not to be able to address it as an American company." Coming from someone else that might carry less weight, but Nvidia is now the focus of every big technology company.

You don't have to be naive. Messages like "the world is hungry for AI, let's put American AI at the forefront" surely contributed to the relaxation of trade conditions approved by the Trump administration a few weeks ago. In fact, if Huang has been one of the great protagonists of CES, it is not so much for the technology presented as for continuing to push that commercial narrative. The CEO said on the 'No Priors' podcast that "the idea of decoupling from China for philosophical or national security reasons is not based on common sense," also stating that he was optimistic about the relaunched relationship with China thanks to the new measures adopted by the United States.
Because "China is an adversary, but also a partner. And the idea of decoupling is naive," he said.

And what if China says no now? Although Nvidia has ramped up production of its H200s in Taiwan in anticipation of an avalanche of orders from China, the ball is not in its court: it is in that of its potentially huge new customer. Although everything boils down to "business," in this case something else is at stake: technological sovereignty. Huang believes there will be no official announcement from China about opening its hand to let its companies buy American technology, and assumes that if orders are being placed, it is because they can be. When the United States intensified its trade bans, China responded: it banned Apple devices in official buildings, halted purchases from companies like Micron (which has also pivoted to AI, abandoning the consumer RAM segment) and restricted purchases of the Nvidia chips manufactured expressly for the Chinese market. Meanwhile, local companies such as Huawei and Cambricon have advanced with their own solutions, achieving very strong performance that is letting China's robotics and AI industries flourish.

Friction. Even so, the H200 remains the "standard" for many data centers, and there may be an appetite to buy as much as possible ahead of potential future bans while domestic chips mature. We will see whether the wishes of the American giant and of the Chinese companies that see CUDA as the optimal platform for AI come true. China is very clear that its "dragons" are enough to stand up to Western technology, the 'Delete America' plan is still in motion, and accepting the H200 could perpetuate a dependence on foreign technology, something the government wants to avoid at all costs. In any case, according to Bloomberg, Nvidia will begin shipping the H200 en masse in the short term.

Images | Nvidia, Karola G, Pexels

NVIDIA fears that China will hinder the sale of H200 chips, so it is asking for advance payment without exchanges or returns

The prospect of NVIDIA marketing H200 chips in China has been the talk of the town these days, and no wonder. Amid the uncertainty over whether the Chinese government will ultimately allow them into the country, the company has imposed unusually strict payment conditions on customers who want to buy these chips in China. According to Reuters, the company now requires full payment up front, with no option to cancel, get a refund or change configurations once the order is placed.

Why it matters. NVIDIA has billions at stake in China, the world's largest semiconductor market. Chinese technology companies have placed orders for more than 2 million H200 chips, valued at about $27,000 each, well above the company's available inventory of 700,000 units, according to the outlet. But the regulatory situation is a powder keg: the United States has just authorized the sales with a 25% tariff, while China has not yet confirmed whether it will allow the imports.

Regulation. The Biden administration had banned the export of advanced AI chips to China, but Donald Trump reversed that policy last month, allowing H200 sales with the aforementioned 25% levy that goes directly to the US government. However, China has not yet given official approval. According to Bloomberg, Beijing plans to approve some imports this quarter, but only for select commercial uses; the military, sensitive government agencies, critical infrastructure and state-owned companies would be excluded for security reasons.

Protection. The payment terms transfer all of NVIDIA's financial risk to its customers, who must commit capital without any certainty that Beijing will approve the imports or that they will be able to deploy the technology as planned. According to the outlet, although NVIDIA has always required advance payments from Chinese customers, deposits were sometimes accepted in lieu of full payment. Now the company is being especially strict due to the lack of regulatory clarity.
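The gap between demand and stock is easy to quantify from the figures cited above (the order value and the shortfall are my own arithmetic on those numbers, not figures from the reports):

```python
# Demand vs. available inventory for the H200, per the figures cited above.
chips_ordered = 2_000_000   # orders placed by Chinese tech companies
unit_price = 27_000         # ~$27,000 per H200
inventory = 700_000         # NVIDIA's available units

order_value = chips_ordered * unit_price
shortfall = chips_ordered - inventory

print(f"Order value: ${order_value / 1e9:.0f}B")     # $54B
print(f"Inventory shortfall: {shortfall:,} chips")   # 1,300,000 chips
```

With orders worth roughly $54 billion against 700,000 units on hand, NVIDIA would be short about 1.3 million chips even if every order were approved, which helps explain the full-payment-up-front terms.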
A recent scar. NVIDIA has reason to be cautious. Last year it had to write down $5.5 billion in inventory after the Trump administration abruptly banned the sale of the H20 chip to China, the most powerful product it could offer there at the time. Although the United States has since reversed that decision, China has in turn banned H20 shipments. That experience explains why the company prefers to secure payment before any regulatory surprise.

Overwhelming demand. Chinese tech giants like ByteDance and Alibaba see the H200 as a significant upgrade. This chip, currently NVIDIA’s second most powerful, offers approximately six times the performance of the now-blocked H20. According to Bloomberg, both Alibaba and ByteDance have privately told NVIDIA they are interested in ordering more than 200,000 units each.

Delivery times. NVIDIA plans to fill initial orders from existing stock, with the first batch of H200 chips expected to arrive before the Lunar New Year holiday in mid-February, according to Reuters. The company has also approached TSMC to increase H200 production to meet demand in China, with additional manufacturing planned for the second quarter of 2026.

The local competition. Meanwhile, NVIDIA’s Chinese rivals are gaining ground. As Bloomberg reports, local manufacturers such as Huawei have developed AI processors, including the Ascend 910C, although its performance still lags behind the H200 for large-scale training of advanced models. Cambricon Technologies also plans to significantly increase its production of AI chips in 2026, expanding its market share and filling the gap left by NVIDIA.

What’s coming now. In the coming days we will learn whether China makes a final decision on H200 imports. Jensen Huang, CEO of NVIDIA, said at CES this week that customer demand for H200 chips is “quite high” and that the company has “activated its supply chain” to increase production.
Huang also noted that he doesn’t expect the Chinese government to make a formal statement about approval, but rather that “if purchase orders come in, it’s because they can place them.”

Cover image | NVIDIA and Arthur Wang

In Xataka | There is a new player in the race for the autonomous car and it is one that should worry Tesla a lot: NVIDIA

NVIDIA already has its own Autopilot. And Tesla has reason to worry

NVIDIA has presented Alpamayo at CES 2026, a family of open source AI models designed specifically for autonomous vehicles. The system not only detects obstacles and plans routes: it “reasons” about complex situations and explains its driving decisions. Mercedes-Benz will be the first to implement it, in the CLA, which will arrive in the United States in the first quarter of 2026.

Why it matters. Tesla has kept its FSD system completely closed since 2016, and now NVIDIA is betting on releasing the model weights, the simulation framework and more than 1,700 hours of driving data. This strategy could make NVIDIA “the Android of autonomous mobility” and let any manufacturer access capabilities comparable to Tesla’s without requiring years of internal development.

The contrast. Tesla sells FSD as a proprietary system integrated only into its own cars, generating recurring income from its own customers. NVIDIA wants to sell chips to the entire industry, providing the base technology for others to build their systems on. The first model earns more per individual sale, but the second can scale exponentially if multiple manufacturers adopt the platform.

In detail. Alpamayo 1 is a 10-billion-parameter model that processes video and generates both a trajectory and the logic behind each decision. Jensen Huang has described it as the “ChatGPT moment for physical AI.” The Mercedes CLA will integrate 30 sensors (cameras, radar, ultrasonic…) and will be marketed as a “Level 2+” system, similar to Tesla’s FSD in that it requires constant driver attention.

Between the lines. NVIDIA’s move looks very smart from a regulatory point of view: by generating a “reasoning trace” that explains every decision, it reassures regulators who are often terrified of black-box models. And by releasing the code, it hooks startups and manufacturers into its CUDA ecosystem.
If you can’t develop autonomy yourself (most traditional manufacturers can’t), you just use Alpamayo… and run it on NVIDIA chips.

The threat. For Tesla, this means the dreaded commoditization of a technology that has been its main differentiator. If Mercedes delivers FSD-like capabilities in March based on a system any brand can buy, Tesla’s sales pitch weakens. Elon Musk has already commented on the announcement on his X profile: “It’s easy to get to 99%, then it’s very difficult to solve the rest.” It also reads like an implicit admission that Tesla hasn’t solved that final problem either.

Yes, but. Open source guarantees neither success nor an Android-like outcome. Actual implementation, integration with specific sensors and validation in real conditions remain complex. Tesla has been accumulating millions of kilometers of driving data for years; NVIDIA offers 1,700 hours, a tiny fraction in comparison. The question is whether Tesla’s data advantage offsets the distribution advantage NVIDIA can gain by partnering with multiple manufacturers. Time and the market will tell.

In Xataka | If it seems expensive to change the battery in an electric car, wait until you see what it costs in a Ferrari LaFerrari: more than 200,000 euros

Featured image | Pixilustration

NVIDIA has paid $20 billion to “license” Groq’s technology. It actually bought it

NVIDIA has reached an agreement to “license” assets from Groq and will pay $20 billion for them. The company, not to be confused with Elon Musk’s chatbot Grok, has been designing and manufacturing AI chips for model inference for years. The quotes around “license” matter, because this is not a licensing deal: it is a stealth acquisition.

What has happened. On Wednesday news broke that NVIDIA had agreed to sign a licensing agreement with the AI startup Groq. Groq’s own executives confirmed it on their blog, where they described a “non-exclusive license agreement for inference technology to accelerate AI inference on a global scale.” But what both companies say is one thing, and what this really is is quite another.

How to buy a company without buying it. As part of the agreement, the company’s CEO and co-founder, Jonathan Ross, will go to work for NVIDIA, as will Sunny Madra, its current president, and other senior executives who “will join NVIDIA to help NVIDIA advance and scale this licensed technology.” Groq says it will continue to operate as an “independent company” led by Simon Edwards, until now its chief financial officer (CFO) and now its CEO.

NVIDIA keeps (almost) everything. In September Groq raised a $750 million financing round that placed its valuation at $6.9 billion, with Disruptive, BlackRock and other firms participating. Alex Davis, CEO of Disruptive, indicated on CNBC that NVIDIA will keep all of Groq’s assets except one: Groq’s newly launched cloud business.

NVIDIA’s biggest “pseudo-acquisition”. The operation is by far NVIDIA’s largest, well above its purchase of the Israeli chip designer Mellanox for $6.9 billion in 2019.
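To put the price in perspective, a quick calculation with the figures above shows how far the payment exceeds both Groq’s last valuation and NVIDIA’s previous record deal:

```python
# Price NVIDIA pays vs. Groq's last valuation and the Mellanox deal
# (all figures taken from the article)
groq_price = 20e9        # "licensing" payment to Groq, USD
groq_valuation = 6.9e9   # valuation set by the September funding round
mellanox_price = 6.9e9   # Mellanox acquisition price in 2019

print(f"Premium over last valuation: {groq_price / groq_valuation:.1f}x")  # 2.9x
print(f"Vs. Mellanox deal: {groq_price / mellanox_price:.1f}x larger")     # 2.9x
```

Paying nearly three times a valuation set only months earlier is the kind of premium normally seen in full takeovers, not licensing agreements.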
In an internal email obtained by CNBC, NVIDIA CEO Jensen Huang explained that “although we are adding talented employees to our ranks and licensing Groq’s intellectual property, we are not acquiring Groq as a company.” The wording is deliberate and delicate: NVIDIA may be trying to escape regulators’ scrutiny with this type of pseudo-acquisition.

They have made a pseudo-acquisition before. Last September NVIDIA made an identical move with the server startup Enfabrica, “betting” $900 million on it. As now, they called that operation a licensing agreement for its technology, and as now, the CEO of Enfabrica, Rochan Sankar, and other employees ended up joining NVIDIA’s staff.

What is Groq? Although the name gets confused with that of xAI’s chatbot, this AI startup does something very different from that model. Groq was founded in 2016 by a group of former Google engineers led by Jonathan Ross and Douglas Wightman. Ross was one of the designers of Google’s Tensor Processing Units (TPUs), and Wightman was part of the Google X team and would become Groq’s first CEO until his departure in 2016.

What Groq does. The company has designed AI chips specialized specifically in inference, that is, in accelerating the execution of already-trained models. While NVIDIA and other companies are especially focused on chips for model training, their hardware is not as well suited to inference, an equally critical phase.

Chatbots at full speed. That’s where Groq comes in: its chips accelerate inference dramatically, so that when we chat with models they “write” at very high speeds. This is how very high token/s rates are achieved, far above other infrastructures. Not only that: Groq is also cheaper thanks to its specialized chips, so if you want your chatbot to respond at full speed, Groq’s chips are a fantastic option.

How to be a monopoly without saying it.
This investment demonstrates NVIDIA’s intention to diversify its business and not get stuck in its own solutions. The huge operation gives it a major competitive advantage, because none of the big AI companies had focused specifically on inference chips. Groq did from the beginning, and with this “deal” it seems clear that NVIDIA’s dominance in this sector can only strengthen. It is, some analysts say, a defensive move rather than a strategic one, and they may be right: Google is getting stronger and stronger with its TPUs, and the fact that Groq is now basically part of NVIDIA, although they don’t want to put it that way, will allow it to compete better against Google and the rest of the rivals that are beginning to challenge that dominance.

Image | Groq | NVIDIA

In Xataka | AMD’s problem is not that it doesn’t make good GPUs for AI. It’s not even close to NVIDIA
