Europe is taking its technological independence so seriously that it is aiming for the most ambitious goal: NVIDIA

Europe cannot continue to be the technological vassal of the United States. With that powerful message, the CEO of Mistral presented a roadmap a few days ago with which he believes Europe can keep pace in the artificial intelligence race. The warning came just as several companies are defining the future of European technological sovereignty, and one of them is Euclyd. It is seeking 100 million euros, it is backed by one of ASML's former bosses, and it has a clear objective: to stop depending on NVIDIA. And it is not the only one.

Euclyd. We have already talked at length about ASML. Although names like Intel, TSMC, NVIDIA or Qualcomm come to mind more readily when we talk about the technology industry, ASML is the Dutch company that builds the most advanced machines for manufacturing semiconductors. Without it, the technology industry would not be what it is, to the point that China is investing everything in having its own ASML. Well, Bernardo Kastrup is a former ASML director and, in 2024, he founded Euclyd. The startup is backed by former ASML CEO Peter Wennink and, according to CNBC, is seeking the capital it needs to start manufacturing chips at scale.

100 times more efficient than NVIDIA. In this new financing round, Euclyd is seeking $100 million, and the goal is to create inference chips for AI. These chips are designed so that models can apply what they learned in the training phase, and they are optimized for high speed, low latency and, above all, much lower energy consumption than training chips. And that is where the ambitions peak. Euclyd, based in Eindhoven, claims that its 'Craftwerk' chip system is 100 times more energy efficient for AI inference than NVIDIA's Vera Rubin chips. That sounds very good, but the comparison is somewhat inflated, because Vera Rubin, NVIDIA's new generation, is not a pure training or inference platform: it is optimized to do both.

European movement.
In any case, Euclyd is currently raising the money with an eye toward delivering inference chips to its first two customers by 2027. And it is not alone: others such as the British Olix, Optalysys and Tactile, the French Lago or the Dutch Axelera have raised more than 800 million euros to date. That is private money; on the public side, Europe has the FAMES pilot program, with 830 million euros to finance this type of project. It is an extremely modest amount compared to what is moving on the other side of the pond, but between financing chip companies, renewables and European data centers, it is a sign that the feeling that Europe must fend for itself is real.

World movement. The interesting thing is that this does not respond only to Europe's sense of technological sovereignty. It goes further, pointing at the great whale of AI: NVIDIA. Whatever company we think of, part of its hardware, or all of it, probably comes from NVIDIA. Mistral itself reached a very juicy agreement with the company led by Jensen Huang to acquire thousands of GPUs, but the industry is already seeing what happens when all the eggs are in the same basket. That is why NVIDIA's greatest potential rivals are among its clients. Meta, Tesla and Amazon buy from NVIDIA, but at the same time they are developing their own chips. The Chinese giants want NVIDIA chips, but they also develop alternatives with local companies. All of this is creating business for less visible companies such as Texas Instruments, Marvell or Broadcom, since they are the ones that companies that do not want to depend so much on NVIDIA turn to.

Google. In fact, just as startups developing AI chips are appearing in Europe, in the United States an ecosystem of companies raising billions of dollars is taking shape. Two examples are Cerebras Systems, valued at 23 billion dollars, and MatX, founded by former engineers from Google's TPU development team.
Google itself, whose TPUs are manufactured by Broadcom, is seeking an agreement with Marvell to diversify its inference chip business.

NVIDIA responds. There is a phrase that has always made me laugh, "you think the police are stupid", and it applies perfectly here. NVIDIA realized some time ago that it too must diversify, and it has gone from injecting obscene amounts of money into only a few companies to supporting smaller but promising ones. That way it gains clients in the curious circular financing of AI while continuing to lead the segment. But in addition to investing in others, it invests in itself. In March, it invested 4 billion dollars in a photonics company to build optical interconnection systems for next-generation data centers. It is also spending more than 18 billion on R&D and winning juicy contracts with both TSMC and Samsung, which make the chips for the company's AI platforms. In the end, if all these markets have something in common, it is unbridled spending. Europe, China and the United States have embarked on a race with no end in sight, one that will perhaps face its greatest test when Anthropic and OpenAI go public this year.

In Xataka | Europe thinks that it is the one who wants to become independent from US technology companies. It's actually the other way around.

They are raising the rent on your NVIDIA GPU

GPU prices are through the roof. I'm not talking about AMD's RX 9000 or NVIDIA's RTX 5000, since those are for gamers. I am referring to the GPUs that, suddenly, are the only ones that matter: GPUs for AI hyperscalers. The Big Tech companies of AI have paralyzed the entire consumer market, cornering the production of the few component manufacturers that exist in the segment and causing a brutal shortage. Good luck if you want to buy an SSD or RAM, and it is hitting companies too: Valve cannot release the Steam Machine, and Apple has just removed the option to configure the Mac Mini and Mac Studio with the largest amount of RAM. Simply put, either there is no stock... or what there is is tremendously expensive. And the irony is that this situation is starting to impact the AI business itself, where some now have to pay almost double the price to rent NVIDIA GPUs. It is the GPU-as-a-service model.

Sky-high prices for cloud GPUs. Here we must differentiate between hyperscalers and AI companies that do not have their own facilities. Amazon, Microsoft, Meta, Tesla or Google, among others, are hyperscalers: they build gigantic data centers and fill them with tens of thousands of GPUs (usually from NVIDIA, which dominates this market) to meet their needs. In them they run the training and inference workloads of their models, but some have also become service providers. Amazon Web Services, Google Cloud and Microsoft Azure maintain a parallel business as huge lessors of NVIDIA GPUs: they buy huge lots of H100, H200 and A100 cards, integrate them into their infrastructure, and simply rent their computing capacity to whoever is willing to pay the price. It is like what NVIDIA itself does with cloud gaming through GeForce Now: a company that is interested in AI but cannot build a data center can pay to rent that computing capacity from the big landlords. So far so good, because it is a win-win for all parties, but the problem comes when scarcity hits.
On this playing field there are not only Google, Microsoft and Amazon. There are other companies more focused on the cloud GPU business, such as CoreWeave, which increased rental prices by 20% a few months ago, coinciding with the early stages of the RAM and SSD crisis. And the price increase was not the only change: the minimum contract term went from one year to three. An article in Business Insider shows more clearly what this demand exceeding supply costs. Carmen Li, CEO of the analysis firm Silicon Data, notes that NVIDIA's veteran H100s have risen 20% in the last three months, from $2.20 per hour to $2.64. The B200s are along the same lines: from $4.40 per hour to $5.35. The real problem comes with the H200, where rental prices have jumped 48%: from $2.75 per hour a couple of months ago to $4.08. It is almost double for the same product, because those who need it most want even more power for their latest models; so much money is being injected into this sector that more and more companies without data centers need more computing power for their products. The component manufacturers behind these GPUs cannot meet the excessive demand, which is causing waiting times for new chips of between 36 and 52 weeks and, since there is not a GPU for everyone, cloud rental prices... go up. Between the big three and Meta, more than 650 billion dollars will be spent on AI infrastructure this year, and Li points out that, since demand for AI exceeds all expectations, not only is there not enough for everyone, but the old GPUs that hyperscalers sell off when they renew their equipment barely depreciate: in its second year of use, an H100 can be sold for 85 cents on the dollar; in its third year, for 84 cents.
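As a quick sanity check, the percentage increases quoted above follow directly from the hourly rates cited by Silicon Data. A minimal sketch (the figures are the ones from the article; the rounding convention is an assumption):

```python
# Sanity-check the hourly rental increases quoted in the article.
# Prices are the Silicon Data figures cited above, in USD per hour.
prices = {
    "H100": (2.20, 2.64),
    "B200": (4.40, 5.35),
    "H200": (2.75, 4.08),
}

for gpu, (old, new) in prices.items():
    pct = (new - old) / old * 100  # percentage increase
    print(f"{gpu}: ${old:.2f}/h -> ${new:.2f}/h (+{pct:.0f}%)")
```

The H100 and H200 lines reproduce the article's 20% and 48% figures; the B200 works out to roughly 22%, in the same range as the H100.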
According to several voices in the sector, this is a tsunami that has already taken the consumer market by storm, but that will get worse with the rise of agentic AI. It is no longer just training and basic inference: agents execute several steps autonomously, consuming more computing capacity per request than traditional chatbot queries. Translation: a market that some argue should be stable is becoming something like electricity or energy, a roller coaster of prices playing by the rules of savage capitalism.

In Xataka | Using Netflix in 2018 was much better than now: we have normalized degrading experiences

Amazon Web Services is such a profitable business that its CEO is already thinking about something more ambitious: competing with NVIDIA

Andy Jassy is the CEO of Amazon and such an advocate of artificial intelligence that he expects AI to transform the company's workforce in the coming years. It makes sense that he captains a liner that has pivoted to the AI business: before succeeding Bezos, he led Amazon Web Services. And in his latest annual letter to shareholders, Jassy leaves several notes that give us clues about the company's future. It plans to compete against NVIDIA and SpaceX. And it has 200 billion dollars to invest.

The photo. The company is going like a rocket. Amazon closed 2025 at 717 billion dollars in revenue, exceeding the previous year's 638 billion by 12%. Operating income increased by 17% to 80 billion and, for its part, the AWS cloud business also performed well, growing 24% year-on-year in the last quarter. They achieved this, according to Jassy, without being able to meet the demands of some clients due to the current data center situation, but even so, they are more than happy.

Burning cash. And those good vibes are leading Amazon to invest some 200 billion dollars in the coming months. The CEO has commented that "they are not going to invest that amount in 2026 following a hunch," adding that they are not going to be conservative in their bets and that what they are looking for is to lead the artificial intelligence business. Some 50 of those billions are expected to end up in the pockets of an OpenAI that will need a boost after the NVIDIA "sit-in", the closure of Sora and Disney's withdrawal of investment. The 200 billion will be concentrated on AI infrastructure, a bet on the future that may pressure margins in the short term, but from which they expect a lot once the business gets going. For its part, OpenAI is going to invest 100 billion in AWS over the next eight years. The money coming in offsets the money going out, like almost everything in this AI market.

The business engine.
What business? Well... the chip business. Amazon is one of the companies (like Meta, Tesla or OpenAI itself) that buys from NVIDIA but is also developing its own solution. There are three proper names: Graviton, Trainium and Nitro, training and inference chips (depending on the case) whose business is growing at triple digits year-on-year. Trainium in particular, the chip used to train some of the company's models, can "save tens of billions of dollars a year." But it is not just about saving money by making the chip at home and avoiding NVIDIA's prices and market competition: it is about not depending on NVIDIA at all.

The NVIDIA garden. We have already explained on more than one occasion how NVIDIA is the engine of the artificial intelligence business. Not only does it have the hardware that powers the data centers of the main AI players, it has the money to invest in both established companies and, above all, in the startups that may define the future of the sector. And Jassy aims, directly, to become a hardware rival, one that competes with NVIDIA, AMD and even the reborn Intel. According to the CEO, if Amazon sold its chips on the open market, it could address a market of about $50 billion annually, more than double its current chip business. It would still be well below some of its rivals, but it could sell its hardware in conjunction with its AWS software, and it is with that "complete AI package" that Amazon would be strong against its rivals.

Amazon's Starlink. Stepping on the hose of the strong hardware trio is not the only field in which Jassy wants to play. We already know that Bezos, Amazon's founder, has his own space business, but in parallel, Amazon itself is deploying its Kuiper project: its own constellation of satellites in low orbit for broadband Internet, which aims to compete directly with SpaceX and Elon Musk's Starlink.
The deployment began in 2025 with a modest 27 satellites, but in 2026 they want to launch another 3,200. In the end, as all mega-companies do, Amazon seeks to be ubiquitous and permeate absolutely every millimeter of the business. Now, although its AWS muscle is indisputable, competing against NVIDIA is a tall order. Jensen Huang's company is TSMC's biggest customer (the great global factory), has deployed very aggressively and intelligently in the AI segment, building a network that is difficult to replicate, and has also secured its position as the main customer of Samsung and SK Hynix, the companies leading the high-bandwidth memory without which AI cannot take off.

Image | Amazon (edited)

In Xataka | If you think the internet was much better before AI, congratulations: they have created an extension for you

Big Tech has entrusted the keys to its kingdom to NVIDIA. Now they want the keys back

NVIDIA is no longer a gaming graphics card company: NVIDIA is a ubiquitous company. It is the baby at the baptism, the bride at the wedding and the cement of the artificial intelligence industry. Its hardware is in the most powerful data centers on the planet, its software controls everything, and its money is invested in any company that has something to say in AI. Big Tech (and everyone else) has trusted NVIDIA blindly and handed it the keys to the house, but something is changing. And now they want the keys back, to regain control.

All the spotlights. Microsoft, Amazon, Google and Meta have bought hundreds of thousands of NVIDIA GPUs to shape their AI ambitions. At some point many began to develop their own hardware, but in the end NVIDIA's was everywhere and offered the most guarantees, so they "gave up." Apple, curiously, opted for Amazon. And it is not just the big ones: OpenAI, Anthropic, Mistral and xAI are pure AI companies that bet very heavily on NVIDIA from the beginning. Its hardware leads the way, it is the hardware that Western and Chinese companies want, and demand is so brutal that it has made NVIDIA the best client of TSMC and Samsung.

AMD. But no one likes to have all their eggs in one basket, and those same names are moving. From a position of absolute dominance, in a short time we could move to a much more diversified hardware market. AMD is NVIDIA's great historical rival in PC gaming (and in consoles), and although it was out of the conversation for a few years, it has returned in force. It has the hardware, it is moving to secure the same memory that NVIDIA has (with Samsung winning more than anyone else), and it is landing contracts as juicy as the one it recently achieved with Meta. The big rival also has deep pockets and is committed to taking a piece of the AI pie.

The Chinese threat. On the other side of the world we have China.
We have said on numerous occasions that China is on to other things when we talk about AI. If the West pursues AGI (with questionable claims that it is already here), China does not particularly care. It wants fast chips that allow it to create accessible and monetizable models in the short term. It also has Huawei, the company that has become the spearhead of the Chinese technology industry and whose collaboration with foundries such as SMIC is allowing it, in a way unthinkable given the vetoes, to develop advanced chips. Cutting-edge chips are still out of reach, but Huawei already has inference chips more powerful than NVIDIA's H20, by its own account, as well as a supercluster for training.

Taking back control. Because that term, "inference", is where the current key lies. AI training is important because it is what gives the model its data, the wardrobe to pull from, but inference is the final layer, the one that processes the user's request to produce a response. It does not need as much raw power, and that is what almost all the companies mentioned above are taking advantage of. Amazon, Google and Meta have programs in which they are actively researching or developing their own inference chips. OpenAI has signed an agreement with Broadcom to supply chips, and Musk's xAI also has its own chips and plans to open factories. In China things are no different, with Cambricon wanting to be a local alternative to NVIDIA and giants like Alibaba or ByteDance getting into chip design.

Groq. Given all this, do you think NVIDIA is standing still? Among its hardware proposals it has Groq, an inference accelerator designed to process a large amount of data at enormous speed alongside Vera Rubin. Groq was a relative unknown in the world of AI, until NVIDIA licensed it, and it specialized from the beginning in exactly that: chips with minimal latency for inference.
The key is in the architecture of its chips; it was a piece missing from the NVIDIA catalog, and it shows that, although everyone else wants the keys back, the one who already had them may have made a backup copy to remain the reference. They may all be preparing their chips, but while those arrive, NVIDIA is already there and, in fact, with Groq it seeks to sneak into a $50 billion pie: China.

A problem for NVIDIA. But of course, that is only part of the story. The other part is that NVIDIA also has all its eggs in one basket: AI. In the middle of last year we already mentioned that six customers represented 85% of all NVIDIA revenue in the previous quarter. That is absolute nonsense, and it shows that, if there is a shift in technology, a puncture of the bubble or a new player that arrives in force, the situation for NVIDIA may not be so favorable. The question is whether a regime change can come and everything will collapse like a house of cards. The uncomfortable thing is that an absurd amount of money is being invested, and that is not something that can scale forever.

In Xataka | Jensen Huang believes we have reached the "coming of the AI wolf." It is perfect for feeding a Tamagotchi

Huawei has been plotting a plan for six years and is now ready to dethrone the undethronable: NVIDIA

With the beginning of the technological war between the United States and China, Huawei was given a mission: to become the spearhead of Chinese technology companies. After a tough first few years that resembled a pilgrimage through the desert, the Chinese company has come back strong. Not only has it regained leadership in China; it has taken steps to become the lever of the whole industry. If a few days ago it presented its supercomputer, now it is the turn of something more modest but essential in the AI race: an inference chip that, they claim, is more powerful than the NVIDIA alternative.

Atlas 350. Within the framework of its Annual Partners Conference, the company has once again introduced the Atlas 350 platform (already announced at Huawei Connect 2025 last September). This is a card that uses the latest version of its Atlas 950PR processor and which, according to the company's data, delivers 2.8 times the inference performance of the competition. That competition is the H20 chip, the trimmed-down version that NVIDIA had permission to sell in China. It is a platform focused on rapid data movement, which makes it ideal for high workloads in tasks such as search recommendations, multimodal generation and large-scale language model serving. It is an accelerator, in short: a piece of hardware dedicated to a very specific task, which is what it knows how to do well inside a server.

Into the fray. To train AI, China has other weapons, some from Huawei itself, but this Atlas 350 exists to meet the Chinese industry's goal of making AI tools accessible and monetizable as soon as possible. In fact, at the event it was confirmed that there are already partners launching servers built around the Atlas 350. And here is the really relevant detail: Huawei is not just presenting things, it is presenting them and announcing that it already has partners launching products with this new technology.
Because the idea is that each new piece of hardware begins to be distributed and deployed as soon as possible among the Chinese companies that fall within the ambitious five-year plan for technological sovereignty.

Essential. For months now, the company has been moving to position itself as the lever for the rest of the Chinese technology network, with NPUs, heat-dissipation hardware, standard AI cards, motherboards and "other different forms of hardware to facilitate the development of customers and partners." At the event, they stressed that "although the first half of the artificial intelligence era focused on computing power, the second will be defined by data." And it is in that inference business that Huawei wants to provide all its infrastructure and become an indispensable piece of the ecosystem. Because China, within its great plan for the future, is fighting to become a power not only in the AI we know, but in physical artificial intelligence, robots and 6G networks, a field in which Huawei also leads.

Enough? That is the big question, and the answer may depend less on raw power than on the ecosystem. I am not talking about the rich hardware ecosystem Huawei is building, but about the ecosystem of tools. If everyone uses NVIDIA cards for training (in inference, as we have seen, everyone is gradually waging war on their own), it is for NVIDIA that software and processes are optimized. And the leading Chinese companies want NVIDIA hardware in order to match or surpass their American rivals. This has been a soap opera, with NVIDIA pressuring Trump to let it sell the H200 in China, achieving it in exchange for a 25% levy on those purchases, and China then sending contradictory messages. On March 31 there will be a meeting in Beijing between Trump and Xi Jinping, and export controls, including the NVIDIA issue, are expected to be on the table. And someone who will be watching that meeting very carefully is Huawei.
Because China is at a crossroads right now: it knows that its companies are ordering NVIDIA chips, but at the same time the government does not want them to lean on foreign technology that could leave them stranded again.

Images | Huawei

In Xataka | The looming bottleneck in AI is neither RAM nor gas: it's that TSMC's N3 node is absolutely saturated

Nvidia reacts to criticism of DLSS 5 from players and press

Nvidia introduced DLSS 5 at GTC 2026 as its biggest graphics advancement since ray tracing. In a matter of hours, the technology became the biggest meme generator of the year and turned the entire gaming community against it. After the criticism, Nvidia's CEO has come out to justify the company's decisions. Spoiler: the situation has not improved.

Almost unanimous rejection. Nvidia presented DLSS 5 at GTC 2026 as an evolution that goes beyond traditional upscaling. The system analyzes the color and motion vectors of each frame and applies an AI model to generate photorealistic lighting and materials anchored to the game's original 3D data. However, a good part of the community and industry professionals called DLSS 5 "AI slop." The examples Nvidia used as demonstrations became the spearhead of the backlash: Grace Ashcroft from 'Resident Evil Requiem' with more pronounced cheekbones, fuller lips and uniform skin, or characters from 'Starfield' who gain facial resolution at the cost of aesthetic coherence... A fundamental error that came from assuming that "more photorealistic" means "better", when decades of video game design point in a direction that is, at the very least, complementary.

Nvidia's response. Jensen Huang responded to the controversy during a Q&A session at GTC 2026 itself, in what was the CEO's first public reaction to the general unrest. His words left no room for ambiguity: "Well, first of all, people are completely wrong." His technical argument was that DLSS 5 combines developer-created geometry and textures with generative AI, leaving control in the hands of art teams. Developers can adjust the model to fit the intended art direction, rather than handing that process over entirely to AI.
The executive insisted that this is not post-processing at the frame level but generative control at the geometry level, what Nvidia calls "content-controlled generative AI": adding generative capacity on top of the game's existing geometry without taking away artistic control. Huang added that studios can also explore stylized or non-photorealistic results depending on the type of game they want to build, and that players who don't want to activate DLSS 5 can simply ignore it and continue with current scaling techniques. Bethesda backed that position, clarifying that what was shown was a very preliminary version.

Calculation errors. Some pointed out that presenting DLSS 5 in the middle of a keynote mostly dedicated to cloud and enterprise AI, before an audience already extremely polarized about AI, turns Huang's statements into ammunition that will be repeated until Nvidia manages to explain convincingly what DLSS 5 is and why gamers and developers need it. Comparisons have been drawn to Apple's "Antennagate" (when the company's response to criticism was perceived as arrogance): being right and communicating it poorly is like not being right at all.

What lies behind. PC Gamer also points to a deeper debate: if it was previously possible to modify the graphical style of a game through settings or mods, hardware-assisted generative AI represents a shortcut controlled by large technology companies that could radically transform how artistic authorship is perceived in video games. It remains to be seen whether, when games arrive that have integrated DLSS 5 from the start under a defined artistic direction, that perception will change. That is when we will find out, without a doubt, who in this whole battle was really wrong.

In Xataka | NVIDIA is going to spend $4 billion on photonics companies. It is preparing for what is coming

Beijing just gave the best news to NVIDIA

NVIDIA has been caught in the crossfire of the trade war between the United States and China for more than a year. Its most powerful chips could not be sold to the Asian giant because Washington required export licenses and, later, Beijing did not give imports the green light. This week both fronts have been unlocked simultaneously, and Jensen Huang has taken advantage of his annual developers conference to announce it out loud. The factories are starting their engines, and the future looks brighter than ever for the company.

H200 chips, unlocked. The H200, NVIDIA's second most powerful chip today, had become the center of the trade and technological tensions between China and the United States. The Trump administration had already granted export licenses, though of course only after securing its cut. What was missing was Beijing's approval, which, according to Reuters, arrived this week for many of the customers who were demanding access to these chips. Among them are ByteDance, Tencent, Alibaba and DeepSeek.

China is a gold mine. Before the restrictions imposed by the US, China represented 13% of NVIDIA's total turnover. The export veto was highly criticized by Jensen Huang, who never stopped denouncing the measure, trying to get around it and explaining that what the US had done was not protect its technology, but shoot itself in the foot. During this blockade, Chinese companies have been advancing both in the development of AI models and in the development of their own chips. They still have room for improvement, but this effort to "become independent" from US technology is already bearing fruit, and perhaps it would not have happened without the US veto.

There will also be Groq chips. NVIDIA will not only export its H200 chips: it is preparing a version of its AI inference accelerators for the Chinese market.
Specifically, we are talking about chips from Groq, a company in which NVIDIA invested $20 billion to "license" its technology, although in practical terms it has acquired it. These chips are especially interesting because they are not used to train AI models, but to execute and "serve" them. This is the market that is growing fastest right now, and where competition is toughest.

But China already has inference chips. Companies like Baidu are already producing their own inference chips, which means that NVIDIA will not enter the Chinese market from a monopoly position, but as just another competitor. What is striking here is that, according to the sources cited by Reuters, the Groq chips are not cut-down versions, nor are they adapted to this market: they will be the same ones that companies in the US and the rest of the world use.

China will continue without access to Vera Rubin. This week NVIDIA presented a new line of products built around its next AI chips, the Vera Rubin. These chips cannot be sold to China due to current restrictions, so NVIDIA has settled on a hybrid architecture: Vera Rubin for markets where it can operate freely, and Groq as the inference component for China. NVIDIA is brimming with optimism, and with good reason. Jensen Huang spoke in his inaugural conference precisely about how promising the company's future looks. Previous projections spoke of medium-term revenues of $500 billion for its Blackwell and Rubin chips. Inference solutions and this "opening" to China now double that forecast: Huang hopes to reach at least $1 trillion in cumulative orders by 2027, a simply dizzying figure that makes it clear that NVIDIA's business seems to be in an enviable state of health.

Image | NVIDIA

In Xataka | DLSS 5: Millions invested in AI graphics improvements so people say it looks like an Instagram beauty filter

The games of 2026 aim to be graphical marvels. NVIDIA is clear that the solution lies in AI

The GDC, or Game Developers Conference, is a very special video game event. It is not focused on announcing new titles, but on hosting presentations and roundtable talks among the people who make video games. Lovers of the industry's most technical side consider it unmissable, and one company that never skips an edition is NVIDIA. It has arrived at this GDC 2026 flexing all its muscle and with one clear idea: the future of gaming runs through artificial intelligence.

DLSS 4.5, the umbrella. Leaving aside the current state of the PC market, strained by the demands of artificial intelligence and the unprecedented component crisis we find ourselves in, AI applied to video games is something NVIDIA has been pushing for several generations. A lot has happened since the RTX 2000 series and the arrival of real-time ray tracing, accompanied by a solution to keep performance sustainable: DLSS. Deep Learning Super Sampling is an upscaling tool that lets the GPU render the game at a lower resolution and then scale it up to the native resolution of our monitor. This improves frame rates while maintaining image quality.

Over the generations, DLSS has evolved into a complete neural rendering suite spanning several technologies. It is no longer just deep-learning upscaling, but a whole series of techniques to improve both image and performance. For its core job, DLSS 4.5 brings a greater understanding of the scene, improving both image quality and performance at higher resolutions. But it has more up its sleeve.

Frame Generation. One of those techniques, perhaps the most notable, is the enhanced frame generation mode. If in the previous generation DLSS could multiply the frames per second by up to four (through deep learning, three frames were "invented" for each native one provided by the GPU), with DLSS 4.5 the figure rises to 6x.
This is crucial to maintaining fluidity in games with extreme graphical loads if we want to play at 4K. At 1,440p the power of the GPU is usually more than enough, but to play at 4K with all the current effects enabled, generating frames seems key to taking advantage of monitors' high refresh rates. According to NVIDIA's data, the jump from 4x to 6x increases performance in path-traced titles at 4K by up to 35% on RTX 50 GPUs. It relies on Reflex, also NVIDIA technology, to keep latency minimal, and the result is a curious scenario: we can be playing a game in which most frames are reconstructed rather than native without noticing the latency.

Multi Frame Gen, the "magic". Within that frame multiplier there is a very interesting technology: DLSS 4.5 Dynamic Multi Frame Gen. Its name is fairly self-explanatory. Basically, it is an algorithm that sets the best frame multiplier for each moment depending on the image, the performance of the GPU and even whether we have vertical sync enabled. It switches automatically, all the time, between 2x and 6x (passing through intermediate multipliers) with the goal of always sustaining the highest possible frame rate, but without squandering resources.

That is to say: if we have a 120 Hz monitor, the GPU changes the multiplier depending on the situation to try to guarantee those 120 FPS without wasting resources. If we are in a phase of a game with low graphical load (an interior, for example), a 4x multiplier may be enough. If we step outside, we may need that 6x push, and the system switches automatically. The next time we go indoors it drops back to 4x, and so on, constantly. The rationale is simple: make the experience as consistent as possible while prioritizing native frames over AI-generated ones, never generating frames needlessly.
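NVIDIA has not published the internals of Dynamic Multi Frame Gen, but the behavior described above — pick the lowest multiplier that still reaches the monitor's refresh rate, so as many frames as possible stay native — can be sketched in a few lines. Everything here, from the function name to the FPS figures, is an illustrative assumption, not NVIDIA's code:

```python
# Toy sketch of the "dynamic multiplier" idea (not NVIDIA's algorithm):
# choose the smallest frame-generation multiplier that meets the
# monitor's refresh rate, so native frames are preferred.

MULTIPLIERS = [1, 2, 3, 4, 5, 6]  # DLSS 4.5 tops out at 6x

def pick_multiplier(native_fps: float, target_hz: float) -> int:
    """Return the lowest multiplier whose output reaches target_hz.

    Falls back to the maximum (6x) when even that is not enough.
    """
    for m in MULTIPLIERS:
        if native_fps * m >= target_hz:
            return m
    return MULTIPLIERS[-1]

# Low-load interior scene: 40 native FPS -> 3x already hits 120 Hz.
print(pick_multiplier(40, 120))
# Heavy path-traced exterior: 22 native FPS -> needs the full 6x.
print(pick_multiplier(22, 120))
```

The real system also weighs image content, GPU load and vsync state, but the core trade-off is the same: generate only as many frames as the target refresh rate demands.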
"New" word: path tracing. All these technologies exist to breathe life into games that will soon start consuming more and more of the PC's raw resources. Because if ray tracing is already demanding, we are going to have to get used to a new term: path tracing. It is not actually new, but it is basically a more complete form of ray tracing that attempts to simulate even more realistically how light interacts with a game's geometry. Ray tracing can be applied to everything (shadows, reflections or global illumination) or to each effect separately, but path tracing is a unified solution. In short: it is like applying every possible ray-traced effect at once. This consumes a lot of resources, something we can see in games like 'Cyberpunk 2077' or 'Resident Evil Requiem', and it is the reason for DLSS 4.5's rendering techniques and 6x frame generation.

The games are ready. In the end, it is about AI achieving performance that the GPU, on its own, might not reach. With top-of-the-range graphics cards like the RTX 5080 or RTX 5090 we may prefer to lean on native rendering, but with others like the 5070 or 5060, these AI "helpers" let us stretch a game's visual quality further while maintaining good performance. And all these tools together will be necessary given what is coming. We have already mentioned some games, but over the next few months others will arrive, like '007 First Light', 'Resonant Control', 'Star Wars Galactic Racer' or 'Directive 8020', that promise to be visual wonders and will integrate these technologies.

In Xataka | Nintendo has not been just a video game company for thirty years. But it is now when it is showing it with dividends
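To build intuition for why path tracing is so much heavier than applying individual ray-traced effects, a back-of-envelope ray count helps. All figures below (rays per effect, samples per pixel, bounce depth) are illustrative assumptions, not measurements from any real game or engine:

```python
# Back-of-envelope ray budget at 4K (illustrative numbers only):
# per-effect ray tracing casts a few rays per pixel, while path
# tracing follows full light paths with several samples and bounces.

PIXELS_4K = 3840 * 2160  # 8,294,400 pixels

def rays_per_frame(rays_per_pixel: int) -> int:
    """Total rays needed to shade one full frame."""
    return PIXELS_4K * rays_per_pixel

# Hypothetical per-effect setup: one ray each for shadows,
# reflections and global illumination -> 3 rays per pixel.
rt_rays = rays_per_frame(3)

# Hypothetical unified path tracing: 2 samples per pixel, each
# following a path of up to 4 bounces -> 8 rays per pixel.
pt_rays = rays_per_frame(2 * 4)

print(rt_rays)            # 24,883,200 rays per frame
print(pt_rays)            # 66,355,200 rays per frame
print(pt_rays / rt_rays)  # ~2.7x the work, every frame
```

Multiply that per-frame cost by a 120 Hz target and it becomes clear why upscaling and frame generation are pitched as the enablers of path-traced games.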

Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

Meta has not thrown in the towel with its MTIA (Meta Training and Inference Accelerators) chips. And although things have not gone entirely their way, ceasing to depend on NVIDIA is too juicy a prize to give up on. For that very reason, the company has presented a roadmap of four new chips with which it intends to accelerate both its content recommendation systems and its generative AI capabilities. The first chip is already operational; the other three will arrive before the end of 2027. Below are all the details.

Dependence. For years, Meta has relied almost entirely on NVIDIA and AMD to power its data centers. Developing your own silicon is complicated, but if you pull it off, it can be a very shrewd financial and strategic bet in times like these. According to statements by its vice president of engineering, Yee Jiun Song, designing its own chips allows the company to "eliminate what we don't need," which translates directly into lower costs. Added to that is greater independence from possible price swings or supply restrictions.

Which is exactly what it has announced. The four new chips are the MTIA 300, 400, 450 and 500, each with a different role. The MTIA 300 is already in production and is intended to train the algorithms that decide what content Facebook and Instagram users see. The MTIA 400 (known internally as Iris) has completed laboratory testing and is on its way to data centers; Meta claims it offers performance "competitive with leading commercial products," according to its official statement. The MTIA 450 (Arke) will double the high-bandwidth memory compared to the 400 and is scheduled for early 2027. The MTIA 500 (Astrid), the most advanced, will arrive in mid-2027 and will incorporate, according to the company, improvements in low-precision data processing.
The chips are manufactured by TSMC, the world's largest semiconductor producer, and have been developed in collaboration with Broadcom on the open RISC-V architecture.

The pace is the most striking part. What is unusual is not just that Meta makes its own chips, but the speed at which it plans to do so. The usual cycle in the industry is one or two years between generations; Meta aims to release new versions every six months. "The pace of AI evolution is so fast that we always want to have the most advanced chip available when we need it," Song said. This accelerated cadence is possible, according to the company, thanks to a modular design that allows components to be reused between generations.

And this does not replace NVIDIA. It is important not to lose sight of the context. Meta remains one of the largest buyers of GPUs on the market. Just a few weeks ago it signed multimillion-dollar agreements with NVIDIA and AMD to supply chips for the next few years, and it has also reached an agreement to rent computing capacity on Google chips, as reported by Wired. MTIA chips are designed for specific, internal tasks (inference and recommendation systems), not for training large language models, so this strategy is complementary to its chip deals with NVIDIA or AMD. Nor should we forget that Meta recently had to abandon its most ambitious training chip, known internally as Olympus, after the project ran into trouble in the design phase, according to The Information. Susan Li, Meta's CFO, confirmed at a Morgan Stanley event that the company still intends to develop processors capable of training models, but gave no further details.

And now what. The real test of this bet will come when the chips are deployed at scale. The challenge right now is guaranteeing HBM memory supply amid a RAM crisis that is affecting the entire technology sector.
Song himself acknowledged to CNBC that the company "is absolutely concerned" about it, although he stated that they have secured supply for their current plans. In the long term, we will see whether Meta can achieve something similar to what Google did with its TPUs.

Cover image | Mariia Shalabaieva and Meta

In Xataka | OpenClaw has caused a real media earthquake in China. The Government has prevented its officials from using it

NVIDIA has lost hope in China, which is why it has started mass-manufacturing its next-generation GPUs for AI

NVIDIA faces a crucial year in 2026. The company has become one of the largest strategic investors in the AI ecosystem, with dozens of billion-dollar investments in other companies, models, infrastructure and robotics. But, at the end of the day, it is a company that supplies chips and, so far, the H200 has set the tone. According to a report by the Financial Times, that is over: NVIDIA has just ordered TSMC to start mass manufacturing Vera Rubin, its next-generation hardware for AI. The reason? It has lost all faith in China.

In short. With the entire AI industry looking to the future, and with NVIDIA's Vera Rubin on the starting grid, it was strange that the company kept investing so much in having TSMC work on a chip as old as the H200. Although it has been around for a while, it has positioned itself as unbeatable in the industry thanks to its price/performance ratio, so these are the chips on which the AI empire has been built. However, time passes and NVIDIA needs to move. Data centers need more power, new models are more demanding, and the spearhead of the software sector, such as OpenAI or Google, has demanded new solutions. According to two sources consulted by the financial newspaper, close to NVIDIA's plans, the company has grown tired of "waiting in limbo" and has begun to accelerate the delivery and deployment of Vera Rubin.

Incomparable. As could not be otherwise, TSMC will be in charge. The Taiwanese foundry has reportedly already been asked to start diversifying its production line to begin manufacturing the new chips. And if you are wondering why it is not enough for Google or OpenAI to simply buy more H200s, the answer is that the two chips are nothing alike. The H200 is a more classic data center GPU: the configuration that AI and computing companies have been working with on these servers for years.
Vera Rubin, however, is a paradigm shift made up of new CPUs and new GPUs, designed so that everything works as a single rack-scale accelerator. It has not only more power, but also NVIDIA's latest software and hardware additions and something very important: enormous bandwidth. The higher the bandwidth in such a system, the more data it can move simultaneously. This means greater efficiency when training, but also lower cost in inference. It is not an update; it is a platform change designed for models with trillions of parameters.

No faith left in China. To put it more simply, if the H200 is like a "super powerful graphics card", Vera Rubin is like a mini data center in itself. And if you are wondering why they did not start production sooner, the reason is... China. Jensen Huang, NVIDIA's CEO, has spent months 'fighting' with Washington to get it to open its arms in the trade and technology war between the US and China. Trump ended up agreeing, and Huang commented earlier this year that they had "turned on" all production lines again to supply the very high Chinese demand.

The problem is that that demand did not arrive. At least, it was not as high as Huang expected. In its results presentation a few days ago, NVIDIA's chief financial officer commented that "although small quantities of H200 for Chinese customers were approved by the US government, we have not yet generated any revenue. And we do not know whether imports into China will be allowed."

We have already covered the problem: the US was letting NVIDIA sell its GPUs, but the Chinese government did not seem so convinced. China's main Big Tech companies were demanding NVIDIA's solutions, arguing that they need them to keep up with their American rivals, but the ball was in the court of the government and customs.
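The bandwidth argument above can be made concrete with a back-of-envelope calculation. In large-model inference, every weight typically has to be read from memory for each generated token, so memory bandwidth caps throughput. All numbers below are illustrative assumptions, not Vera Rubin or H200 specifications:

```python
# Back-of-envelope: time to stream a model's weights through an
# accelerator once, at a given memory bandwidth (illustrative only).

def seconds_per_pass(params: float, bytes_per_param: float,
                     bandwidth_gbs: float) -> float:
    """Seconds to read every weight once at bandwidth_gbs (GB/s)."""
    total_bytes = params * bytes_per_param
    return total_bytes / (bandwidth_gbs * 1e9)

# Hypothetical 1-trillion-parameter model stored as 8-bit weights.
PARAMS = 1e12

# Roughly one full weight pass per generated token, so the pass time
# bounds tokens per second per accelerator.
slow = seconds_per_pass(PARAMS, 1, 5_000)    # ~5 TB/s class system
fast = seconds_per_pass(PARAMS, 1, 20_000)   # ~20 TB/s class system

print(slow, fast)  # 0.2 s vs 0.05 s per pass: 4x the bandwidth,
                   # 4x the token throughput, at the same weight size
```

This is why the article frames bandwidth as both a training-efficiency and an inference-cost lever: at trillion-parameter scale, moving the weights is the bottleneck, not the arithmetic.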
China is promoting an AI that is different from that of the US, more focused on low costs and rapid customer adoption, and at the same time it wants to build its own hardware ecosystem with companies like SMIC or a Huawei that already has its own AI supercomputer.

A complicated U-turn. The Financial Times points out that Chinese President Xi Jinping and the US President will meet at the end of March to discuss export controls. The problem is that, according to its sources, even if the barrier is lifted completely (and not just for certain companies) and China can buy H200s en masse, turning TSMC's ship around so that it starts producing H200s again would be complicated. It is not as simple as pressing a button and switching from producing one thing to another. If that situation arises, "NVIDIA would take up to three months to reallocate or add capacity to the supply chain to produce H200."

Image | One of Vera Rubin's PCBs

Rebound winner. What is clear is that NVIDIA is not going to lose out on the operation. Huang already argued that the United States could not miss the opportunity to take a slice of a multi-billion dollar market (which is why the US let the cards be sold... with a 25% tariff), but whether it is the Chinese or the Western industry, it is NVIDIA they keep buying the H200 from and, 'shortly', the Vera Rubin. And the rebound winner in this operation is Samsung. Of the three companies that manufacture memory (and that have catapulted the RAM and SSD crisis we are in), Samsung is the one that has completed its new-generation HBM4 memory. It is the one that has passed NVIDIA's high standards and the one that is already being mass-manufactured for integration into Vera Rubin systems.

Everyone, pay attention. As we said, NVIDIA has the entire industry at its feet.
Google, xAI and Meta are working on their own chips, but together with Microsoft, Amazon Web Services, OpenAI, Mistral and Anthropic they are some of the companies that …
