NVIDIA has lost hope in China, which is why it is moving to mass production of its next-generation AI GPUs

NVIDIA faces a crucial year in 2026. It has become one of the largest strategic investors in the AI ecosystem, with dozens of billion-dollar investments in other companies, models, infrastructure and robotics. But, in the end, it is a company that supplies chips and, so far, the H200 has set the tone. According to a report by the Financial Times, that's over: NVIDIA has just ordered TSMC to start mass-manufacturing Vera Rubin, its next-generation AI hardware. The reason? It has lost all faith in China.

In short. With the entire AI industry looking to the future, and NVIDIA with Vera Rubin on the starting grid, it was strange that the company kept investing so much in having TSMC work on a chip as old as the H200. Although it has been around for a while, it has positioned itself as unbeatable in the industry thanks to its price/performance ratio, so these are the chips on which the AI empire has been built. However, time passes and NVIDIA needs to move. Data centers need more power, new models are more demanding, and the spearhead of the software sector (such as OpenAI or Google) has been demanding new solutions. According to two sources consulted by the financial newspaper who are close to NVIDIA's plans, the company has grown tired of "waiting in limbo" and has begun to accelerate the delivery and deployment of Vera Rubin.

Incomparable. As expected, TSMC will be in charge. The Taiwanese foundry has reportedly already been asked to start diversifying its production line to begin manufacturing the new chips. And if you're wondering why Google or OpenAI can't simply buy more H200s, the answer is that the two chips have little in common. The H200 is a more classic data-center GPU: the configuration that AI and computing companies have been working with on these servers for years.
Vera Rubin, however, is a paradigm shift: new CPUs and new GPUs designed so that everything works as a single rack-scale accelerator. It brings not only more power, but also NVIDIA's latest software and hardware additions and something very important: enormous bandwidth. The higher the bandwidth in a system like this, the more data it can handle simultaneously. That means greater efficiency when training, but also lower inference costs. It is not an update; it is a platform change designed for models with trillions of parameters.

Losing faith in China. To put it more simply, if the H200 is like a "super powerful graphics card", Vera Rubin is like a mini data center in itself. And if you're wondering why production didn't start sooner, the reason is... China. Jensen Huang, CEO of NVIDIA, spent months 'fighting' with Washington to get it to open its arms in the trade and technology war between the US and China. Trump ended up agreeing, and Huang commented earlier this year that they had "turned on" all production lines again to supply the very high Chinese demand. The problem is that that demand never arrived. At least, it was not as high as Huang expected. In the latest results presentation, NVIDIA's CFO commented a few days ago that "although small quantities of H200 for Chinese customers were approved by the US government, we have not yet generated any income. And we do not know if imports to China will be allowed." We already explained the problem: the US was letting NVIDIA sell its graphics cards, but the Chinese government did not seem so convinced. China's main Big Tech companies were demanding NVIDIA's solutions, arguing that they needed them to keep up with their American rivals, but the ball was in the court of the government and customs.
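The bandwidth argument is easy to make concrete with back-of-envelope arithmetic. The sketch below is illustrative only: the model size, weight precision and link speeds are assumptions for the sake of the example, not NVIDIA specifications.

```python
# Back-of-envelope: why interconnect bandwidth matters for trillion-parameter models.
# All figures are illustrative assumptions, not real product specs.

def transfer_time_s(params: float, bytes_per_param: float, bandwidth_gb_s: float) -> float:
    """Seconds needed to move a model's weights once over a link."""
    total_bytes = params * bytes_per_param
    return total_bytes / (bandwidth_gb_s * 1e9)

PARAMS = 2e12   # hypothetical 2-trillion-parameter model
BYTES = 2       # 16-bit weights

for bw in (100, 900, 3600):  # GB/s: illustrative link tiers
    t = transfer_time_s(PARAMS, BYTES, bw)
    print(f"{bw:>5} GB/s -> {t:6.1f} s per full weight transfer")
```

The same amount of data moves in a fraction of the time as link speed grows, which is exactly why a rack-scale design with higher bandwidth cuts both training time and inference cost.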
China is promoting an AI different from that of the US, more focused on low costs and rapid customer adoption, and at the same time it wants to build its own hardware network with companies like SMIC or a Huawei that already has its own AI supercomputer.

Complicated swerve. The Financial Times points out that China's president, Xi Jinping, and the president of the United States will meet at the end of March to discuss export controls. The problem is that, according to its sources, even if the barrier is lifted completely (and not just for certain companies) and China can buy H200s en masse, turning TSMC's ship around so that it starts producing H200s again would be complicated. It is not as simple as pressing a button and switching from producing one thing to another. If that situation arises, "NVIDIA would take up to three months to reallocate or add capacity to the supply chain to produce H200."

Image | One of Vera Rubin's PCBs

Rebound winner. What is clear is that NVIDIA is not going to lose out here. Huang already argued that the United States could not miss the opportunity to take a slice of a multi-billion-dollar market (which is why the US let the cards be sold... with a 25% tariff), but whether it is the Chinese or the Western industry, it is from NVIDIA that they keep buying the H200 and, 'shortly', Vera Rubin. And the rebound winner in this operation is Samsung. Of the three companies that manufacture memory (and that have catapulted the RAM and SSD crisis we are in), Samsung is the one that has completed its new-generation HBM4 memory. It is the one that has passed NVIDIA's high standards and is already being mass-manufactured for integration into Vera Rubin systems.

Everyone attentive. As we said, NVIDIA has the entire industry at its feet.
Google, xAI and Meta are working on their own chips, but together with Microsoft, Amazon Web Services, OpenAI, Mistral and Anthropic they are among the companies that…

NVIDIA is going to spend $4 billion on photonics companies. It is preparing for what is coming

NVIDIA never makes a move without a purpose. At the end of August 2025, the company led by Jensen Huang announced that in 2026 its next-generation artificial intelligence (AI) platforms will use photonic interconnects to achieve higher transfer speeds between GPU clusters. The announcement came during 'Hot Chips', the conference specializing in semiconductor engineering and high-performance computing held in Palo Alto (California), and it was just a prelude to what was to come. This same week, NVIDIA has revealed that it is going to invest $2 billion in Lumentum, and the same amount in Coherent. These two companies have something very important in common: they specialize in developing photonic technologies. Shortly after NVIDIA confirmed its interest in them, the shares of these two companies rose 5% and 9% respectively. The company led by Jensen Huang has also committed to purchasing products from Lumentum and Coherent worth several billion dollars, and to using their advanced laser solutions and optical networking technologies.

Photonics is the support that cutting-edge semiconductors need. Most IC designers and manufacturers are working on the development of silicon photonics. Douglas Yu, a TSMC executive responsible for systems integration, explained very clearly in September 2023 what disruptive capacity this technology has: "If we manage to implement a good integration system for silicon photonics, we will unleash a new paradigm. We will probably place ourselves at the beginning of a new era." Silicon photonics is the discipline that seeks to develop silicon technology to optimize the conversion of electrical signals into light pulses.
The most obvious field of application for this innovation is the implementation of high-performance links which, on paper, can be used both to handle communications between several chips and to optimize the transfer of information between several machines. The advanced packaging technologies used by leading semiconductor manufacturers, such as TSMC, Intel or Samsung, can benefit greatly from a very high-performance inter-chip communication mechanism. So can large data centers where a large number of machines must be connected. However, there is one discipline in particular with an overwhelming future projection that would benefit enormously from the advantages offered by silicon photonics: AI.

This is precisely NVIDIA's bet. In AI clusters, thousands of GPUs must work in unison, so it is essential to connect them using high-performance links. This challenge can be solved with traditional copper cables or pluggable optical modules, but both solutions introduce very significant inefficiencies into the infrastructure. The most problematic are energy loss and bottlenecks. Data transfer can consume up to 30 watts per port, which increases heat dissipation and the likelihood of failure. Additionally, latency limits the scalability of clusters as the number of GPUs in data centers grows. To resolve these inefficiencies, NVIDIA will integrate the optical components required for photonic interconnects into the same package as the switching chip. This technology is known as CPO (Co-Packaged Optics) and reduces power consumption to just 9 watts per port. It also minimizes signal loss and improves data integrity. It looks really good. NVIDIA has confirmed that it will integrate CPO technology into its Quantum-X InfiniBand and Spectrum-X Ethernet interconnect platforms during 2026.
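The per-port figures above (roughly 30 W for pluggable optics versus 9 W with CPO) translate into very large numbers at cluster scale. A quick sketch, where the port count is a made-up illustration rather than a real deployment figure:

```python
# Rough cluster-level saving from the per-port figures cited in the text:
# ~30 W per port for pluggable optics vs ~9 W with co-packaged optics (CPO).
# PORTS is a hypothetical cluster size, purely for illustration.

PLUGGABLE_W = 30
CPO_W = 9
PORTS = 100_000  # assumed number of optical ports

saving_w = (PLUGGABLE_W - CPO_W) * PORTS
print(f"Saving: {saving_w / 1e6:.1f} MW")           # 2.1 MW
print(f"Reduction: {1 - CPO_W / PLUGGABLE_W:.0%}")  # 70%
```

A 70% per-port reduction at this hypothetical scale is on the order of megawatts, which is the kind of margin that matters in a data center's power and cooling budget.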
However, there is something important that should not be overlooked: CPO is not going to be an extra. When it arrives, it will be established as a structural requirement of the next generation of AI data centers, in a clear attempt to increase the competitiveness of NVIDIA's AI hardware platforms.

Image | Generated by Xataka with Gemini

More information | Reuters

In Xataka | Intel and TSMC lead the photonic chip revolution. Their problem is that China has just gotten fully involved in this war

NVIDIA was going to make the mother of all investments in OpenAI, but the era of favors between friends is over

NVIDIA has emerged as the pillar of artificial intelligence. Its chips power the world's most powerful data centers, and it is receiving billion-dollar investments to keep the wheel turning. At the same time, it has become one of the largest strategic investors in the artificial intelligence ecosystem. OpenAI seemed to be its best friend, but that's over. And Jensen Huang, CEO of NVIDIA, makes it clear: the next investments will probably be the last. Also in its great rival.

From $100 billion. That was the magic figure we talked about a few months ago. Recreating the "vendor financing" schemes of the dotcom bubble, NVIDIA was going to finance OpenAI with $100 billion. In exchange, OpenAI would buy NVIDIA chips for the same value. It was a "trap" operation because the company would become the financier of its own premium client. With such an investment, OpenAI was expected to build data centers that would need between four and five million NVIDIA GPUs: Huang commented at the time that this represented double the total GPUs they had shipped the previous year. In short: an absolute beast. And those $100 billion were a mega-operation, yes, but just one more of the many financing rounds around the company led by Sam Altman.

To $30 billion. But in early February of this year, something unexpected happened. In what seemed like a historic turnaround, Jensen Huang, cornered by the media after a casual dinner at a Taiwanese restaurant, commented that there had never been a 100% commitment to make that mammoth investment. The CEO of NVIDIA pointed out that they would surely still make "the largest investment" in their history, and although he did not give a figure, it was clear it would be nothing like $100 billion. How much? Less, much less: $30 billion. Good luck, OpenAI. A love affair has broken, one that began when Jensen Huang gave a DGX-1 server to Elon Musk back in 2016.
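The scale of that original plan is easy to sanity-check: $100 billion spread across the four to five million GPUs mentioned implies roughly the following spend per unit. This is rough arithmetic from the article's own figures, not an official price:

```python
# Implied average spend per GPU from the figures in the text:
# a $100B investment matched by purchases of 4-5 million GPUs.

investment = 100e9
for gpus in (4e6, 5e6):
    print(f"{gpus/1e6:.0f}M GPUs -> ${investment/gpus:,.0f} per GPU")
```

That works out to $20,000-$25,000 per GPU, which gives a sense of why a deal of this size dwarfs an ordinary hardware purchase.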
Not only has Jensen said the figure will be around $30 billion; he has also mentioned that "it could be the last time" they inject money into OpenAI. The reason is very clear: "because they are going public." From then on, OpenAI will have to change its model completely and will be subject to the designs of the market.

Big bets. With this operation, NVIDIA shows that it is charting another course, one in which it prefers not to marry anyone and not to commit in a truly serious way to a single company. Of course, OpenAI is not the only big operation NVIDIA is getting into. Another $10 billion is in store for Anthropic, OpenAI's great rival both professionally and personally (since Altman and Amodei can't stand each other). But Huang has mentioned that, again, this will probably also be the last, as Anthropic is also expected to go public.

Fewer giants, broader base. OpenAI will soon have $110 billion. Apart from NVIDIA's $30 billion, Amazon will inject $50 billion and SoftBank has committed $30 billion. Huang has hinted that these two large operations could mark the beginning of a change of course: instead of a handful of operations in giants, more investment in smaller companies. NVIDIA has been investing more modest sums in other AI companies over the years: model and software companies, infrastructure, robotics and even autonomous driving. It has been turning its GPUs and platforms into the standard on which the entire artificial intelligence industry is founded, and perhaps this break with giants like OpenAI or Anthropic marks a new beginning in which the focus is on supporting a broader ecosystem of partners. In this way, it can keep shaping its objective: a range of more or less large companies that scale on its platform.
Image | Steve Juvetson, NVIDIA

In Xataka | AI engineers are closer to football stars than ever: NVIDIA has paid 900 million for one

Meta was building its own AI chips to stop depending on NVIDIA. It has ended up surrendering to the evidence

Meta faces a crucial year. While its competitors were laying the foundations of AI, Meta was burning money on the metaverse. That, along with an approach totally different from what Google or OpenAI were doing with AI, left Zuckerberg's company spending a few years in the gutter. After putting its house in order and signing the AI A-Team, Meta was preparing both a great model and new in-house chips for training. Things... haven't turned out as expected.

MTIA. Among Meta's teams focused on artificial intelligence there is one known as MTIA. The name comes from 'Meta Training and Inference Accelerator', and its objective was to research and design in-house chips for artificial intelligence training. Having your own chip makes all the sense in the world, since it is designed around your specific needs. It has another advantage: you do not depend on anyone else. If NVIDIA doesn't have enough chips, it doesn't matter, because you have yours and can keep scaling your data-center systems (and Meta's are immense) to continue training and inference tasks. Meta was not going to handle manufacturing itself; that would fall to the highly reputable TSMC. But the program got off to a bad start.

This is very difficult. Reuters already reported it last year. After testing its first in-house developed training chip, Meta realized that things were not going well. It was underperforming expectations and was also worse than the competition. They did not throw away the chips, but redirected them to other systems (such as the algorithmic recommendation systems of Facebook and Instagram). The problem is that the performance of the training chip, the really important one for the AI race, was not enough.

Strategy change.
The Information echoes a statement from Meta saying that the company remains committed "to investing in different silicon options to meet our needs, which includes the advancement of our MTIA division", and urging us to stay tuned for news that will be shared throughout this year. However, the same outlet notes that Meta has greatly lowered its expectations for its chips. The idea was to have two chips. On the one hand, Iris, a single-instruction training chip that is easy to design, but from which it is difficult to extract all the juice in artificial intelligence training tasks. On the other hand, Olympus, a chip that was to be completed towards the end of this year and that would be the central piece of Meta's training clusters. According to The Information, there were many internal doubts about Olympus' stability, its intricate design and its profitability, so they have left it in the drawer to focus on simpler chips.

The evidence. In the end, if you can't beat your "enemy", join him. The sources consulted by The Information point out that, among other complications, the training software was not as stable as the alternatives offered by the likes of NVIDIA. All of this has ended up producing two multimillion-dollar agreements. In a span of just a few days, Meta signed agreements with both AMD and NVIDIA so that both can supply it with chips to train its AI. It's a win-win for everyone: Meta receives what it needs, NVIDIA gains another client on a list it dominates, and AMD continues to make a name for itself in the sector thanks to agreements like this one or the one it signed last year with OpenAI. In addition, Meta secures several sources so as not to depend on a single company. In fact, it is also estimated that they have signed an agreement to rent TPU units from Google.

The competition.
Meta's objective, therefore, is to diversify its portfolio of AI chip suppliers as much as possible while continuing to investigate its own chips, about which, supposedly, we will learn details later. They may keep working on Olympus or a variant, or decide on another approach. What is clear is that they must develop something of their 'own'. NVIDIA and AMD are suppliers, not competitors as such. The real competition is OpenAI, xAI and Google, and the last two have their factories at full capacity: Google with its TPUs, processors designed exclusively for AI, and xAI with its own chips, which it abandoned and picked up again more recently.

Objective: dethrone NVIDIA. And all this happens in a world in which everyone is 'friends', but enemies at the same time. As I said, NVIDIA is a hardware supplier, but it practically controls the AI computing market and is moving in both hardware and software. It is logical that other companies are investigating alternatives to boost their own AI. Added to the list is an Amazon that is also manufacturing chips of its own, such as the Trainium3 UltraServer, and an OpenAI with its agreement with Broadcom to manufacture chips. It is, as I say, a curious scenario: everyone needs each other, and there is the "circular economy" of AI, but at the same time everyone wants to be independent. The problem is that NVIDIA has a huge advantage here: it has the technology, the contracts with memory companies... and the contacts with which the best chips end up being manufactured: TSMC.

In Xataka | Trump ordered the Pentagon to stop using Claude for being a "Woke AI." Right after he bombed Iran using Claude

It is a nod to Chinese Big Tech and a message for NVIDIA

Huawei has arrived at Mobile World Congress with one objective: to show the world what these last five years of vetoes and sanctions have been good for. The company has just had the second-best year in its history. It seemed impossible when the United States ostracized it, but these five years have served not only to regain the throne in the enormous Chinese market, but to build something more: the idea that China's technological evolution passes through its hands. The result is the announcement at the Barcelona fair of a line of SuperPoD supercomputers with a single objective: that Chinese Big Tech no longer have to depend on NVIDIA.

Return. Huawei has been collaborating with SMIC (China's great foundry) to create chips. Chips that power both its consumer devices and other high-performance parts for large-scale computing. Doing this without violating Western vetoes is clearly difficult (for example, its mobile processors lack 5G and are less powerful than those of Qualcomm or MediaTek), but they are making progress. The symbolic part is that they have turned resilience into their best quality. If in 2020 they were competing for the market with Samsung and Apple with a profit of 129 billion yuan, in 2025 they registered 127 billion dollars, something impressive considering that, above all, it comes from the local market. In this time, Huawei has positioned itself as a lifestyle brand with consumer devices, but also home automation and even cars. But if there is a great frontier today, it is artificial intelligence. And Huawei knew this was something to attack not only from the local perspective, but by launching a global warning.

SuperPoD. Because these supercomputers, really, are not new. The company presented them in mid-September last year with a more local focus, aimed at China. And before looking at the products, you have to understand what a SuperPoD is.
These are high-performance clusters that bring together thousands of specialized AI chips. And those chips are not from NVIDIA, which dominates the global conversation in AI computing, but Huawei's own: its Ascend chips, which it has been developing for years and which China awaits like rain in May to break NVIDIA's hegemony. The idea is the same as in other technology sectors of the Asian giant: not to depend on anyone else. They are the following:

Atlas 950 SuperPoD: a cluster of up to 9,192 Ascend 950DT NPUs per system with up to 1,152 TB of unified memory.

TaiShan 950 SuperPoD: the first general-purpose computing SuperPoD, with two models (96 cores / 192 threads or 192 cores / 384 threads) for, for example, massive virtualization or critical databases.

Local ecosystem. Huawei's approach is very interesting. The Ascend chips are not close to the power and sophistication of NVIDIA's chips, nor to the CUDA technology that has become the language of AI. However, if each chip individually cannot compete for the most demanding tasks, what Huawei has thought is to make these chips scalable. To do this, it has developed a connection technology with ultra-high bandwidth that allows all these chips to be linked together so that, in practice, the system behaves like a single logical computer. This connection technology is called UnifiedBus and, in its statement, Huawei says the idea is to "continue defending open source and open systems to accelerate developer innovation and the prosperity of ecosystems". That resonates with the government's objective: that companies such as Tencent, ByteDance, Alibaba or DeepSeek, which ran into the arms of the latest NVIDIA chips as soon as the ban was lifted, develop their technologies using 'made in China' solutions.

Ambition at the cost of sanctions. All this comes in a tremendously turbulent context.
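A quick division of the Atlas 950 figures quoted above gives a sense of the unified memory available per accelerator. Decimal terabytes are assumed here, since the press materials do not state the convention:

```python
# Unified memory per accelerator in the Atlas 950 SuperPoD, from the
# figures quoted in the text: 9,192 NPUs sharing up to 1,152 TB.
# Decimal TB (1 TB = 1000 GB) assumed for illustration.

npus = 9192
total_tb = 1152
gb_per_npu = total_tb * 1000 / npus
print(f"~{gb_per_npu:.0f} GB of unified memory per NPU")
```

Roughly 125 GB per NPU, which is consistent with the "many modest chips pooled into one logical machine" strategy the article describes.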
China is betting heavily on artificial intelligence and robotics as pillars of the country's technological roadmap, but NVIDIA still has the best product. There are analyses showing that Huawei's best chip is still five times less powerful than NVIDIA's best, and the United States has just made it clear that investment in AI is a matter of national security. The whole mess between Anthropic and the Pentagon has to do with the United States demanding that the AI of its private companies belong to the state, because they claim that the AI of Chinese companies belongs to China, and China will not hesitate to do whatever it wants with that AI. Because computing power is, and will be, at the core of the AI race, Huawei has shown that it is doing everything it can to deliver the best tools. And Western sanctions have only helped China 'wake up' and begin shaping these technological solutions at an accelerated pace.

NVIDIA saw it clearly. It remains to be seen whether customers around the world will adopt Huawei's SuperPoD systems as an alternative to NVIDIA, but what is already on the table is that something is happening, at least in China. In the middle of last year, NVIDIA's CEO pointed out that before the vetoes NVIDIA had 95% of the market share in China, but currently it has only 50%. These vetoes did not stop China; they accelerated the development of its own industry to the point that the competition is now fierce. In fact, the executive recently pointed out that it was absurd for the US to try to stop China with vetoes and sanctions, since China would achieve technological sovereignty sooner or later, and that the ideal would be to take an economic slice while they could... and make Chinese Big Tech dependent on NVIDIA technology. And here Huawei's approach is very interesting because yes, its chips may not be the most powerful, but they are massively scalable and adaptable to the needs of each company.
Images | Huawei, Xataka

In Xataka | Huawei no longer competes: it is building its own…

AMD wants to be the great alternative to NVIDIA in AI chips, and Meta has a plan that involves both

Meta has signed one of the largest contracts in history with AMD for artificial intelligence chips. The agreement represents a boost for AMD in its attempt to stand up to NVIDIA. It also shows how Lisa Su's company intends to push even further into that corner of circular financing that big technology companies have built around AI. There are some nuances worth commenting on, so let's get down to it.

The agreement. Meta will purchase enough chips from AMD to power data centers with up to six gigawatts of computing power over the next five years. According to Wall Street Journal estimates, the total value of the contract would exceed $100 billion, since each gigawatt represents tens of billions in revenue for AMD, according to the company itself. First deliveries will begin in the second half of 2026, with a first gigawatt of AMD's new MI450 chips.

There is more. The agreement is not only about buying chips. As part of the pact, AMD will offer Meta warrants to acquire up to 160 million AMD shares at a symbolic price of one cent per share, which could make Meta the owner of up to 10% of the company. There are conditions, of course: the shares will be released in tranches as certain technical and commercial milestones are met. The last tranche will only be unlocked if AMD stock reaches $600, according to the WSJ. On Monday it closed at $196.60, and after the news broke, AMD shares rose more than 10% in pre-market trading.

AMD seeks its place alongside NVIDIA. The company led by Lisa Su has been trying to gain ground in a market that NVIDIA dominates with more than 90% share. This agreement with Meta, together with the one it signed with OpenAI in October in very similar terms, is its most ambitious bet to achieve it. "Meta has a lot of options.
I want to make sure we always have a clear place at the table when they think about what they need," Su said at the press conference prior to the announcement.

Meta doesn't put all its eggs in one basket. Zuckerberg's company is not betting exclusively on AMD. Last week it also closed an agreement with NVIDIA to acquire millions of its chips for tens of billions of dollars, and it is also in talks with Google over the use of its AI processors. "At the scale at which we operate, there is room for all three," said Santosh Janardhan, head of infrastructure at Meta. The company's strategy is to diversify suppliers and secure enough supply for its major expansion. Meta spent $72 billion on data centers last year and plans to disburse up to $135 billion this year.

And back to circular financing. Meta pays AMD for chips, and AMD returns some of that money in the form of shares. It is a scheme similar to the one we already saw in the agreement between AMD and OpenAI, and practically identical to those of the rest of the big technology companies around AI. The demand problem is also worth noting: Reuters highlighted the words of Matt Britzman, an analyst at Hargreaves Lansdown, who said that although Meta is securing supply and diversifying, "having to give up 10% of its capital suggests that AMD could have difficulty generating organic demand."

What's coming now. The AI race is not only fought in laboratories, but also on the financial field. For AMD, the challenge now is to demonstrate that its chips live up to the demands. For Meta, the goal is to build with them "tens of gigawatts this decade and hundreds of gigawatts or more over time," in Zuckerberg's own words. All this while we witness unprecedented spending on infrastructure and energy whose bottom we apparently cannot see.

Cover image | AMD and Meta

In Xataka | IBM has lived for decades off the fact that no one could kill COBOL. Anthropic has other plans
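The warrant package described above can be valued with simple arithmetic using the article's own numbers: 160 million shares, a one-cent strike, Monday's $196.60 close and the $600 final-tranche trigger. This is a rough intrinsic-value sketch, ignoring vesting conditions and dilution:

```python
# Rough intrinsic value of Meta's AMD warrants at two reference prices,
# using only the figures quoted in the text.

shares = 160e6   # up to 160 million shares
strike = 0.01    # symbolic one-cent strike price

for price in (196.60, 600.0):  # Monday's close and the final-tranche trigger
    value = shares * (price - strike)
    print(f"At ${price:.2f}/share: ~${value / 1e9:.1f}B")
```

In other words, the full package would already be worth over $31 billion at the pre-announcement price, and close to $96 billion if the $600 trigger is ever hit, which explains why analysts read it as AMD paying dearly for demand.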

ByteDance's Seedance 2.0 has surpassed Sora and Veo without NVIDIA chips

Brad Pitt and Tom Cruise fighting on the rubble of a devastated city. A recreation of the most expensive shot in the movie F1 for nine cents. Dragon Ball scenes indistinguishable from the original anime. None of this was filmed by anyone. Everything was generated by Seedance 2.0, the ByteDance video model launched a few days ago. It has its seams, because it builds on existing creativity and much is missing on a narrative level. But on the technical plane it is impressive.

Why it matters. It is no longer "China is coming, China is coming." It's "this is what China already does." The independent consultancy CTOL places it above Sora 2 and Veo 3.1 for specific improvements: native 2K resolution, synchronized audio as standard, simultaneous input of text, image, video and audio (something no Western rival offers all at once) and 30% faster generation.

Between the lines. The uncomfortable thing for Silicon Valley is not just the quality. The most uncomfortable thing is what it was achieved with. Seedance was built without H100s, the NVIDIA chips banned for China. And it still surpasses the models that do have them. Something similar already happened with DeepSeek in LLMs, and now it is happening with synthetic video. The pattern is consolidating: the sanctions are not slowing China down, they are accelerating it, because they force it to innovate faster.

In dispute. Disney, Paramount, Warner Bros. and Sony have sent cease-and-desist requests to ByteDance for violating copyright. And SAG-AFTRA has denounced the use of actors' voices and faces without consent. The trigger was seeing that Seedance was capable of cloning someone's voice from a single photograph. ByteDance has suspended that feature and promised improvements, without specifying which ones.

Yes, but. The studios are more disarmed than they appear: their claims attack the generation of protected content, not the training with that content, which could be covered by fair use.
The music industry has already gone through a similar scenario and ended up negotiating. Hollywood is headed for the same fate: unable to stop this, but at least able to get a cut. Disney already does it with OpenAI. But with ByteDance geopolitics comes into play, and it remains to be seen whether it will manage something similar with a Chinese company rather than a Californian one.

The big question. ByteDance has an asset no one can replicate: the largest short-video ecosystem in the world. TikTok and Douyin know, at an unrivaled scale, what makes a video tick, and that knowledge is built into Seedance. When it reaches CapCut, the most popular editing app in the world, the impact will reach another level. Right now the question is not whether Seedance is better than Sora. We have already seen that it is. The question is whether the world will be willing to use it.

In Xataka | Seedance 2.0 has flooded the networks with AI-generated videos with Disney content. And Disney has picked up the phone

Featured image | BiliBili – Seedance

With the consumer segment drowning, Samsung is the first to manufacture HBM4 memory. And it will be for NVIDIA, of course

Samsung is one of the big names this February. It is expected to present the Galaxy S26, but it has something else on the table that will be a boost not only to its coffers, but to the engine of the South Korean economy: high-bandwidth memory. In the midst of the RAM and SSD crisis, Samsung is ready to mass-produce HBM4 memory. And, how could it be otherwise, it will be for AI.

In short. The South Korean company has not confirmed it, but recent reports published by Reuters and local outlets such as the Korea JoongAng Daily point out that Samsung will begin mass-manufacturing HBM4 memory chips starting next week. It will be the first of the three companies that dominate memory-chip production (the others being South Korea's SK Hynix and the American Micron, which has withdrawn from consumer RAM) to start manufacturing, in large quantities, these memories that are fundamental for artificial intelligence.

HBM4. This type of memory, as its name suggests, has enormous bandwidth. That is crucial for GPU needs, and while NVIDIA has remained faithful to GDDR memory for its graphics cards, AMD did flirt with the stacked technology of HBM chips for its Vega GPUs. However, it is not a technology for the consumer market: not because its performance is inadequate, but because it is too expensive. Making HBM memory costs more than making traditional DRAM chips, but the advantages are there. With HBM4, for example, the density of the stacked chips allows double the bandwidth of the previous generation. That is key to moving more data per second, and these chips also consume up to 40% less energy than HBM3 memory.

NVIDIA. The most interested party is, as we have said on previous occasions, NVIDIA. And if NVIDIA benefits, practically the entire leading artificial intelligence industry benefits too, because its chips are what currently move the industry.
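A rough back-of-the-envelope sketch of where that bandwidth doubling comes from: HBM4 widens the per-stack interface from 1,024 to 2,048 bits, so even at a somewhat lower per-pin data rate the aggregate throughput roughly doubles. The pin speeds below are illustrative ballpark figures from public reporting, not Samsung-confirmed specifications.

```python
# Peak per-stack bandwidth: bus width (bits) x pin data rate (Gbps) / 8 bits-per-byte.
# Pin rates here are illustrative assumptions, not official vendor numbers.

def stack_bandwidth_gbps(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth of one HBM stack, in GB/s."""
    return bus_width_bits * pin_rate_gbps / 8

hbm3e = stack_bandwidth_gbps(1024, 9.6)  # HBM3E: 1,024-bit interface
hbm4 = stack_bandwidth_gbps(2048, 8.0)   # HBM4: interface doubled to 2,048 bits

print(f"HBM3E: ~{hbm3e:.0f} GB/s per stack")  # ~1229 GB/s
print(f"HBM4:  ~{hbm4:.0f} GB/s per stack")   # ~2048 GB/s
```

The takeaway is that the generational jump comes mainly from the wider bus, which is why it can coexist with lower per-pin speeds and the reduced energy consumption the article mentions.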
It is estimated that Samsung's memory will go to NVIDIA's Vera Rubin acceleration systems. In fact, it has been reported that Jensen Huang himself has urged Samsung to accelerate and increase production of these chips. Then again, Huang has asked the entire semiconductor industry to step up and manufacture components for his cards, so it is not something that concerns Samsung alone.

Spearhead. According to a Korea JoongAng Daily source, "Samsung has the world's largest production capacity and broadest product line. It has demonstrated a recovery in its technological competitiveness by becoming the first to mass produce the highest-performance HBM4 memory." Its main competitor in this field, the neighboring SK Hynix, is expected to begin mass-manufacturing its answer between March and April, which gives Samsung a comfortable head start in shipping its memory to NVIDIA. And here Samsung has a great advantage: it does not depend on TSMC. It has its own foundry, and its HBM4 modules are based on 4-nanometer photolithography.

Looking to the future. SK Hynix's delay is not because it has rested on its laurels: it leads the previous generation thanks to its HBM3E memory, but by its own schedule, and because it did not need to rush, it started developing the new generation later than Samsung. Of course, although HBM is the standard in current AI systems, these are expensive chips and, in addition, they run hot, requiring cooling equipment to match. That is why companies are pairing HBM4 production with a new generation of DRAM. The idea is to find a way for this memory, slower but cheaper and cooler, to compete in bandwidth with HBM. Samsung and SK Hynix are working on it, but they will have to compete against someone who did not used to play in this league: an Intel that is not arriving alone, but hand in hand with the Japanese giant SoftBank.
In short: Samsung has decided to flex its manufacturing muscle again. And, most important of all, every company that makes memory modules remains focused on one thing: making hardware for artificial intelligence, while the prices of consumer components such as RAM and SSDs go through the stratosphere.

Images | Maxence Pira, Choi Kwang-mo, NVIDIA logo (edited)

In Xataka | Huawei has kept its promise: it has found a way to boost China's competitiveness in AI compared to the US

NVIDIA will not launch new graphics cards this year, according to The Information

Being a PC gamer today is more a test of patience than a simple hobby. After years marked by skyrocketing prices and shortages, the rise of artificial intelligence has added a new layer of tension to the hardware market. Memory has become a particularly contested resource, and its effects are no longer limited to data centers or large companies: they are beginning to be felt directly in the gaming ecosystem, right where users expected at least some stability.

What's happening. The current doubts stem from a chain of information that must be placed precisely. The Information reports that NVIDIA does not plan to launch new GeForce graphics cards for gaming in 2026, a decision the outlet links to the memory shortage the industry is experiencing. This is not, in any case, a public confirmation from the company; it comes from two people with knowledge of the matter who spoke to the aforementioned outlet on condition of anonymity.

What NVIDIA says. The American manufacturer has not remained completely silent. In fact, speaking to Tom's Hardware, it revealed part of the problem: "Demand for GeForce RTX GPUs is high and memory supply is limited. We continue to ship all GeForce SKUs and work closely with our suppliers to maximize memory availability."

Understanding the cadence. NVIDIA's historical cadence combines two different rhythms that should be kept separate. On one hand, architectural changes, spaced out over time and associated with clear leaps in performance or features. On the other, intermediate versions that refine what exists through adjustments to memory, power consumption or frequencies, keeping the lineup alive. This hybrid strategy explains the constant annual presence of new cards even when the underlying technology remains unchanged. The best way to understand this cadence is to look at recent years, with architectural changes every two to three years and refreshes or expansions the rest of the time.
Under this pattern, what was expected for 2026 was precisely another intermediate refresh of the RTX 50 series, the one that is now in doubt.

The component that really sets the pace. The discussion about new cards usually focuses on the power of the graphics chip, but the current bottleneck seems to be located elsewhere in the chain. NVIDIA usually supplies its partners with a complete kit combining GPU and memory, so the lack of sufficient GDDR7 modules prevents closing that package and, therefore, shipping new units. Under this industrial logic, the memory shortage stops being a secondary problem and becomes the determining factor.

Memory for data centers. This material constraint does not appear in a vacuum, but at a time when the technology industry is rearranging its priorities around artificial intelligence. Data centers dedicated to training and running advanced models demand huge volumes of memory and largely share the same supply chains as consumer hardware. When that pressure increases, available resources tend to shift toward the enterprise segment.

Searching for normality. With the present conditioned by available memory, the great unknown becomes when the true generational change will arrive. According to the information gathered by Tom's Hardware, the internal roadmap would place mass production of the RTX 60 beyond 2027, which could push its effective arrival on the market toward 2028. There is no direct confirmation from NVIDIA on these dates, so it is best to treat them as estimates from sources familiar with the planning.

Images | Xataka

In Xataka | The CEOs of NVIDIA and TSMC sat down for dinner and dessert was a request: the world needs wafers and RAM memory

China has given the green light to buy NVIDIA chips. The problem for its companies is that it will closely monitor each transaction

NVIDIA has hundreds of thousands of H200 chips trapped in limbo. It is one of the company's most powerful chips and the standard among companies training AI. It is the preferred chip for training models, and also the weapon with which the United States sought to leave China out of the game. After moves by both countries, the US finally approved, with a 25% commission attached, NVIDIA selling the H200 to Chinese companies. China has taken its time, but it finally seems it will accept the offer, reluctantly and with an ace up its sleeve: DeepSeek.

The mess. The H200 issue is a soap opera. In the context of the trade and technology war, the United States played one of the best cards it had: preventing one of its most powerful products from reaching Chinese hands. It also hindered European companies like ASML from selling their most advanced semiconductor-making machinery to companies like Huawei or SMIC. China responded, of course. It hit back with rare earths, whose supply it almost exclusively controls, and has gradually shown not only that it can create advanced semiconductors on its own (pushing older technology to the limit), but that it is alive and well in the battle for artificial intelligence. Furthermore, it has developed a robotics industry and an aerospace one practically out of nowhere, freezing out Western chips, and that has caught the United States on the wrong foot.

China makes a move. Seeing that China was advancing while the US was not earning a cent, Washington made its move: it opened the door for NVIDIA to sell its H200s to certain Chinese customers. For each sale, the US would take 25%, but that seems to be a cost the Chinese Big Tech firms were willing to absorb because, at least for now, they need that NVIDIA technology. And the GPU company itself ramped up production, expecting two million orders above normal. The problem is that everything moved very quickly, without China, really, having said anything.
Because here it is not just a question of whether the United States lets NVIDIA sell, but whether China wants its companies to buy. After a tense calm that left requests halted and thousands of H200s in limbo, China has finally made a move. According to Reuters, and as we reported a few days ago, there are companies that will be able to place orders for the H200.

There is a "but". It is not carte blanche for anyone to place an order. According to the WSJ, Chinese authorities have indicated that each purchase must be for a use considered "necessary." That includes advanced research or development in AI. Two factors come into play here. On one hand, it seems there are Chinese companies pressuring the Government to let them access the technology. NVIDIA was already allowed to sell the H20 to Chinese customers, but if those customers can now buy the H200, six times more capable, they want to take advantage of it. On the other, China does not want everyone to throw themselves into NVIDIA's arms precisely because it has spent five years building its own semiconductor industry, with SMIC and Huawei in the lead. China's goal is to stop depending on the US, and if everyone starts buying American chips like crazy, the country will not advance along the technological roadmap it set long ago. That is to say, Chinese regulators will apparently evaluate which companies can or cannot buy the H200 depending on the use they intend to give it. It has been reported that, for example, ByteDance, Alibaba and Tencent will be able to import 400,000 H200 chips. But there is a twist to all this.

DeepSeek. China's quintessential artificial intelligence model is the one that turned both NVIDIA and the United States upside down. The question was how, without access to the latest technology, DeepSeek could optimize its AI so much. On one hand, ingenuity to work around the CUDA standard.
On the other hand, there are those who are convinced that DeepSeek has been trained on smuggled NVIDIA cards. Accusations of smuggling are nothing new in this commercial and technological war but, precisely, and according to Reuters, the company joining NVIDIA's massive H200 order alongside ByteDance, Alibaba and Tencent is... DeepSeek. Officially, and without restrictions, it will be able to access the H200.

"We have given China the argument to launch its own industry and, at the same time, we are giving them access to ours again" – Samuel Bresnick

Whiplash. I really liked this concept that Wired uses to define American policy in this regard. The US is the one that started the conflict, and its position has been pivoting, as with tariffs, between more and less lax measures depending on the moment. It seems clear that it has now reached a point of thinking: "if China is going to reach this technology one way or another, we might as well sell it and earn something along the way." Samuel Bresnick, a researcher at Georgetown's Center for Security and Emerging Technology, comments in Wired that the worst thing you can do is "come and go," noting that "we have given China the argument to launch its own industry and, at the same time, we once again give them access to ours."

Stepping up. And meanwhile, there is Jensen Huang. The CEO of NVIDIA has been given a hero's welcome in recent days in both China and Taiwan, where he has met with some of the companies that move the semiconductor sector. NVIDIA sat at the same table as TSMC, Foxconn and Asus, and Huang came out, half joking, half serious, with one request: he needs wafers and RAM. Regarding the purchase of the H200, China is walking on eggshells, and it makes perfect sense. It is at a point where it does not want to be left behind, and to do so it needs its …
