DLSS 4.5 can multiply the performance of the GeForce RTX 50 by six

At CES 2026, NVIDIA has set hardware aside, but not innovation. If one word sums up its commitment this year, it is AI. Whether or not you like AI-based image reconstruction techniques that lean on the latest GPUs, the reality is that they are here to stay, and in a big way. And if you also want to enjoy games at the best possible quality and at high resolution, even more so. The GeForce RTX 50 cards are the standard-bearers (but not the only beneficiaries) of NVIDIA's latest release, which is rolling out now and will finish deploying this spring: Deep Learning Super Sampling 4.5, or DLSS 4.5 for short, the successor to DLSS 4. Although the latest-generation graphics cards benefit the most from the new technology, there are also innovations compatible with previous series. With DLSS 4.5, NVIDIA promises 4K gaming at 240 Hz with ray tracing thanks to AI. Incidentally, NVIDIA notes that more than 250 titles are already compatible with DLSS 4.0, and that the most ambitious games of 2026, such as 'Resident Evil Requiem' and 'Pragmata', will also be supported.

The secrets of DLSS 4.5, explained

DLSS arrived with the first generation of GeForce RTX GPUs with one objective: to let us enjoy games at higher frame rates even when we were demanding and, on top of that, turned on ray tracing. With DLSS 4.0, NVIDIA managed to free the graphics card from part of the rendering effort in order to raise the FPS without taking a toll on quality. DLSS 4.5 goes one step further, promising games with full path tracing at 4K and high refresh rates, in a move that is not a mere iteration but a profound revision of the underlying technology. As we explained in our hands-on with DLSS 4.0, the combination of better graphics, fluidity and low latency is a holy trinity that cannot be achieved the old way: if we want the best image quality, we have to sacrifice fluidity and latency.
If we chase fluidity above all, the textures will not be as good as they could be. So AI comes into play to make everything possible, even if that means adding invented frames. They are not real, but the experience is so satisfying that it is worth it.

The three pillars of DLSS 4.0 are Super Resolution based on transformers, multi-frame generation and ray reconstruction. Ray reconstruction remains as it was in DLSS 4.0, but the first two go up a level with DLSS 4.5. Let's see where they started and how far they go with NVIDIA's latest technology.

Super Resolution. The GPU renders the game at a low resolution (e.g. 1080p) so it can go very fast. The DLSS 4.0 AI takes that blurry image and turns it into a crisp 4K image. With DLSS 4.5, NVIDIA says we will get cutting-edge image quality alongside dynamic multi-frame generation (up to six times more frames) for remarkable fluidity. The demos we have been able to see show minimal ghosting, greater image stability and smoother edges; in fact, it also improves anti-aliasing, the procedure used to reduce the jagged edges of objects in each frame. In short: it tackles the current problems head-on.

The secret: second-generation transformers. This enhanced Super Resolution is based on a second-generation transformer with improved training, a larger dataset, five times more compute, the ability to analyze many more problematic scenarios than its predecessor, and smarter pixel sampling. In practice, even though the scene is more complex, the reconstruction is much more precise. While this second-generation transformer is a more complex and heavier model, the efficiency of the FP8 format used by the newer series (RTX 40 and 50) softens the impact. In short: that extra intelligence barely penalizes the company's latest graphics cards in terms of speed.

Multi-frame generation.
With DLSS 4.0, up to three artificial frames were created for every frame rendered by the GPU, making heavy games feel surprisingly fluid. With DLSS 4.5, multi-frame generation becomes dynamic: compatible graphics cards can multiply this frame invention by four to reach 190 fps, or generate up to six frames for each rendered frame and hit 240 fps. In practice, the most interesting part is that it can maximize the frame rate based on the monitor's refresh rate. That a GPU can run a game with full path tracing at 4K at a sustained, real 240 Hz is a milestone.

The graph below shows the performance of an RTX 5090 at 4K in several moderately recent games under three scenarios: native rendering, the new dynamic DLSS 4.5, and DLSS 4.5 at x6. As can be seen, this image reconstruction technology delivers higher performance in every title, with notable improvements in games such as 'NARAKA: BLADEPOINT'.

Compatibility and availability. As one would expect, given that this launch does not involve a new generation of GPUs (those are expected between 2027 and 2028), every one of these new features will be available on the company's latest graphics cards, the GeForce RTX 50. Below is a summary table of the main technologies DLSS 4.5 implements and the graphics families compatible with each of them.

                               RTX 50    RTX 40    RTX 30    RTX 20
Multi Frame Generation x6      Yes       No        No        No
Super Resolution               Yes       Yes       Yes       Yes

As for when we can enjoy these improvements, the option to enable the new DLSS 4.5 Super Resolution is already live in the NVIDIA app in more than 400 games for compatible GPUs. Of course, the dynamic x6 frame generation exclusive to the …
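The dynamic behavior described above can be sketched in a few lines. This is not NVIDIA's algorithm, just an illustrative model under a simple assumption: pick the smallest generation multiplier (up to the x6 ceiling NVIDIA quotes) whose output frame rate saturates the monitor's refresh rate.

```python
# Minimal sketch (not NVIDIA's actual logic): choose a frame-generation
# multiplier so the output frame rate fills the monitor's refresh rate.
def pick_multiplier(rendered_fps: float, refresh_hz: float, max_mult: int = 6) -> int:
    """Smallest multiplier whose output fps reaches the refresh rate,
    capped at max_mult (DLSS 4.5 quotes up to x6)."""
    for mult in range(1, max_mult + 1):
        if rendered_fps * mult >= refresh_hz:
            return mult
    return max_mult

# A GPU rendering 40 fps natively on a 240 Hz monitor would need x6.
print(pick_multiplier(40, 240))   # -> 6
# At 100 fps rendered, x2 already saturates a 165 Hz panel.
print(pick_multiplier(100, 165))  # -> 2
```

The point of the dynamic mode is exactly this adjustment: instead of always generating the maximum number of frames, the multiplier adapts to the display, which is how a fixed 240 Hz panel can be driven at a sustained, real 240 Hz.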

A quarter of a century ago a student put together 32 GeForce graphics cards to play Quake III. CUDA came from there

In the year 2000, Ian Buck wanted to do something that seemed impossible: play Quake III at 8K resolution. Buck was studying computer science at Stanford, specializing in computer graphics, when a crazy idea occurred to him: chain together 32 GeForce graphics cards and render Quake III across eight strategically placed projectors. "That," he explained years later, "was beautiful."

Buck told that story in 'The Thinking Machine', the book published by Stephen Witt in 2025 that traces the history of NVIDIA. One of the fundamental parts of that story is the origin of CUDA, the architecture that AI developers have turned into a gem and that has allowed the company to take off and become the most valuable in the world by market capitalization. And it all started with Quake III.

The GPU as a home supercomputer

That, of course, was just a fun experiment, but for Buck it was a revelation: he discovered that specialized graphics chips (GPUs) might be able to do more than draw triangles and render Quake frames. (In 2006, the GeForce 8800 GTS, and its higher-end version, the GTX, would begin the CUDA era.) To find out, he delved into the technical details of NVIDIA's graphics processors and began researching their possibilities as part of his Stanford PhD. He gathered a small group of researchers and, with a grant from DARPA (Defense Advanced Research Projects Agency), began working on an open-source programming language he called Brook. That language allowed something amazing: turning graphics cards into home supercomputers. Buck demonstrated that GPUs, theoretically dedicated to graphics work, could solve general-purpose problems, and do so by exploiting the parallelism those chips offer. Thus, while one part of the chip shaded triangle A, another was already rasterizing triangle B and another writing triangle C to memory.
It wasn't exactly the same as today's data parallelism, but it still offered amazing computing power, far superior to any CPU of the time. That specialized language ended up becoming a paper called 'Brook for GPUs: stream computing on graphics hardware'. Suddenly parallel computing was available to anyone, and although the project barely received public coverage, one person knew it was important. That person was Jensen Huang. Shortly after the study was published, NVIDIA's founder met with Buck and hired him on the spot. He realized that this capability of graphics processors could and should be exploited, and began dedicating more and more resources to it.

CUDA is born

When Silicon Graphics collapsed in 2005 (partly because NVIDIA had become unbeatable in workstations), many of its employees ended up working for the company. In fact, 1,200 of them went directly to the R&D division, and one of that division's big projects was precisely to push this capability of the cards forward.

John Nickolls / Ian Buck.

As soon as he arrived at NVIDIA, Ian Buck began working with John Nickolls, who, before joining the firm, had tried (unsuccessfully) to get ahead of the future with his own bet on parallel computing. That attempt failed, but together with Buck and a few other engineers he launched a project to which NVIDIA preferred to give a somewhat confusing name: Compute Unified Device Architecture. CUDA was born.

Work on CUDA progressed rapidly, and NVIDIA released the first version of the technology in November 2006. The software was free, but it was only compatible with NVIDIA hardware. And as often happens with revolutions, CUDA took a while to gel. In 2007 the platform was downloaded 13,000 times: the hundreds of millions of NVIDIA graphics users only wanted their cards for gaming, and it stayed that way for a long time.
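The programming model that Brook pioneered and CUDA popularized can be illustrated with a toy analogy in plain Python (this is not CUDA code; the function names are made up for illustration): you write a "kernel" that computes a single element, and the runtime launches one instance of it per index. On a GPU, all those instances run in parallel; here a serial loop stands in for the launch.

```python
# Toy analogy of the CUDA/Brook programming model (not real CUDA):
# a "kernel" computes ONE element; a launcher runs it over every index.
def saxpy_kernel(i, a, x, y, out):
    out[i] = a * x[i] + y[i]   # each "thread" handles index i

def launch(kernel, n, *args):
    # Serial stand-in for a parallel launch over n GPU threads.
    for i in range(n):
        kernel(i, *args)

x = [0.0, 1.0, 2.0, 3.0]
y = [1.0, 1.0, 1.0, 1.0]
out = [0.0] * 4
launch(saxpy_kernel, 4, 2.0, x, y, out)
print(out)  # [1.0, 3.0, 5.0, 7.0]
```

The difficulty the next paragraph mentions comes precisely from here: thinking in per-element kernels, and managing memory and synchronization between thousands of them, was alien to most programmers of the time.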
Programming to take advantage of CUDA was difficult, and those early years were very hard for the project, which consumed a lot of talent and money at NVIDIA without showing any real returns. In fact, the first uses of CUDA had nothing to do with artificial intelligence, because AI was barely talked about at the time. It was scientific departments that took advantage of the technology, and only years later would the revolution it could trigger take shape.

A late (but deserved) success

In fact, Buck himself pointed this out in a 2012 interview with Tom's Hardware. When the interviewer asked what future uses he saw for the GPGPU computing that CUDA offered, he gave some examples. He talked about companies using CUDA to design next-generation clothes or cars, but he added something important: "In the future, we will continue to see opportunities in personal media, such as sorting and searching photos based on image content, i.e. faces, location, etc., which is a very computationally intensive operation."

Buck knew what he was talking about, although he did not imagine that this would be the beginning of the true CUDA revolution. In 2012, two young doctoral students named Alex Krizhevsky and Ilya Sutskever developed a project under the guidance of their supervisor, Geoffrey Hinton. That project was none other than AlexNet, software that could classify images automatically, a challenge that until then had been hopeless because of the computing cost it required. Those academics trained a neural network with NVIDIA graphics cards and CUDA software. Suddenly AI and CUDA were starting to make sense. The rest, as they say, is history.

In Xataka | We can forget about AI without hallucinations for now: NVIDIA's CEO explains why

Nvidia tries to placate criticism over the distribution of the GeForce RTX 50. Its answer is not entirely convincing

Several weeks after launch, it is still very difficult to find in stores a GeForce RTX 50 family graphics card at a price aligned with its official cost. The easiest model to get near Nvidia's list price is the GeForce RTX 5070, but the others either carry a significant premium or are simply not available. In this scenario it is understandable that users who have decided to get one of these graphics cards are disappointed. Many of you have clearly expressed it in the comments on our articles, and forums dedicated to PC hardware are full of users complaining about the low availability of the new NVIDIA graphics cards. Given the circumstances, the company led by Jensen Huang has been forced to step forward to appease players and try to explain what is happening.

This is Nvidia's response

As the chart on the cover of this article shows, Nvidia says that during the first five weeks after launch it shipped twice as many GeForce RTX 50 graphics cards as it did GeForce RTX 40 cards in the equivalent period. However, this explanation deserves a calm analysis. As Tom's Hardware and Reddit point out, the comparison does not faithfully reflect what is happening.

During the first five weeks after the launch of the GeForce RTX 40, only the RTX 4090 was available. Jarred Walton, one of Tom's Hardware's editors, explains it very well: that card arrived on October 12, 2022, and the RTX 4080 landed on November 16, just five weeks later. Everyone knows the RTX 4090 is not a suitable graphics solution for a broad range of players, either for its price or for what it offers. By contrast, during the first five weeks after the launch of the GeForce RTX 50 family, all four graphics cards currently available arrived: the RTX 5090, the RTX 5080, the RTX 5070 Ti and the RTX 5070.
The first two landed together on January 30, 2025, while the GeForce RTX 5070 Ti arrived on February 20. Finally, the GeForce RTX 5070 appeared almost at the end of this period, on March 5. With all this in mind, it is reasonable to conclude that Nvidia's comparison is not balanced, not even allowing for the possibility that the company added the RTX 4080 to the GeForce RTX 40 figure. Hopefully both availability and prices will normalize as soon as possible, for the good of users.

Image | Nvidia

More information | Tom's Hardware

In Xataka | If Nvidia lived only from PC GPUs, it would be about to die of success. And US tariffs don't help

The AMD Radeon RX 9000 are ready. And they come determined to go toe to toe with Nvidia's GeForce RTX 50

AMD has just presented its first graphics cards built on the RDNA 4 microarchitecture. It told us about them during CES 2025, but at the time it only revealed a few details of the architecture. The Radeon RX 9000 are destined to go toe to toe with Nvidia's GeForce RTX 50, so presumably they will not have it easy.

In any case, the new AMD graphics cards come full of novelties. Their best asset, a priori, is their microarchitecture. As we are about to see, RDNA 4 brings a lot of refinements, so on paper it should clearly outperform the Radeon RX 7000 with its RDNA 3 microarchitecture. In addition, the Radeon RX 9000 introduce the FidelityFX Super Resolution 4 (FSR 4) image reconstruction technology which, on paper, can deliver a higher frame rate and a higher level of detail than FSR 3.

RDNA 4: this is the real heart of the Radeon RX 9000

The slide we publish below collects the main characteristics of the RDNA 4 microarchitecture. The Radeon RX 9000 GPUs are manufactured with a 4 nm integration technology, presumably from TSMC. But these chips bring many more innovations. AMD promises that the compute units of these GPUs have been deeply redesigned, that their ray tracing performance is higher and that, in addition, they lean on artificial intelligence (AI).

AMD will follow the same path as NVIDIA with these graphics cards: it will tackle image reconstruction using latest-generation algorithms. The Radeon RX 9000 also feature second-generation AI accelerators, third-generation ray tracing accelerators and a second-generation display engine. One important detail: according to AMD, the average performance of an RDNA 4 compute unit is 40% higher than that of its RDNA 3 counterpart.
The next slide clearly shows that many of the improvements introduced by AMD's engineers aim to increase the new GPUs' performance when running the AI algorithms behind image reconstruction. AMD promises that RDNA 4 doubles RDNA 3's throughput in FP16 operations; quadruples it in INT8 and when working with FP16 matrices containing many zero values (a circumstance known as 'sparsity'); and multiplies it by eight when operating on INT8 matrices with sparsity. On paper, that is not bad at all.

Now for AI-based image reconstruction. FSR 4 is only compatible with the new Radeon RX 9000 graphics cards because of the hardware resources AMD has introduced in these chips. In theory, this technology offers a higher frame rate, a higher level of detail and lower latency. In addition, FSR 4 uses the same application programming interface (API) as FSR 3.1, so developers who have worked with that tool should have no problem making the leap to AMD's new technology. A curious note: in the next slide we can see that the company promises its technology is prepared for the neural rendering that will arrive in the future. Sounds good. We'll see what it consists of when the time comes.

The next slide collects the main specifications of the Radeon RX 9070. This graphics card, the more "modest" of the two that make up the Radeon RX 9000 family today, incorporates 56 compute units and 16 GB of GDDR6 VRAM; its GPU runs at a maximum clock frequency of 2,520 MHz, and it has an average power draw of 220 watts. The Radeon RX 9070 XT is a bit more ambitious: it offers 64 compute units and 16 GB of GDDR6 VRAM, its GPU runs at a maximum clock frequency of 2,970 MHz, and it has an average power draw of 304 watts. It will be interesting to see how it stacks up against Nvidia's GeForce RTX 5070 Ti.
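From the specifications above we can make a rough back-of-the-envelope comparison between the two cards. The sketch below assumes, as a simplification, that peak shader throughput scales with compute units times clock frequency; real performance depends on much more than that, so treat the result as indicative only.

```python
# Back-of-the-envelope sketch: peak shader throughput approximated as
# compute units x clock (a simplification; real performance depends on
# memory bandwidth, drivers, the workload, and more).
def relative_throughput(compute_units: int, clock_mhz: int) -> int:
    return compute_units * clock_mhz

rx_9070    = relative_throughput(56, 2520)   # specs from the article
rx_9070_xt = relative_throughput(64, 2970)

uplift = rx_9070_xt / rx_9070 - 1
print(f"RX 9070 XT vs RX 9070: ~{uplift:.0%} more peak throughput")  # ~35%
```

By this crude measure the 9070 XT offers roughly a third more raw compute than the 9070 for about a 38% higher average power draw (304 W vs 220 W), which is consistent with AMD positioning it against a higher-tier Nvidia card.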
These are the prices of the Radeon RX 9000. The RX 9070 XT will cost $599 (about 575 euros), and the Radeon RX 9070 will cost $549 (approximately 527 euros). AMD's plan is for these two graphics solutions to compete head-on with Nvidia's GeForce RTX 5070 Ti and 5070. One last note to conclude: the two graphics cards with which AMD's RX 9000 family debuts will be available from March 6.

More information | AMD

In Xataka | The next revolution of the chips is approaching. Intel, Samsung, TSMC and AMD already work on glass substrates

In Xataka | This is the strategy that has kept Nvidia at the top for more than two decades: the dilemma of the innovator

Some GeForce RTX 50 graphics cards are missing ROP units. Nvidia has explained why

The GeForce RTX 50 family of graphics cards developed by NVIDIA continues to occupy the center of attention in the PC hardware world a month and a half after its presentation. On this occasion we are not going to talk about how difficult it seems to be to get hold of one of these graphics solutions; instead, we invite you to look at a manufacturing defect that affects a small number of the GPUs installed on these cards.

Shortly after launch, some users noticed that the number of rasterization units, known in English as ROPs (Raster Operations Pipeline or Raster Output Processor), integrated into some GeForce RTX 50 GPUs was lower than indicated in NVIDIA's specifications. These units are responsible for executing the operations required by rasterization-based rendering, with the purpose of delivering the final pixels of each image to the monitor.

Nvidia has explained what is happening

Several buyers claim that their GeForce RTX 5090 and 5070 Ti cards have fewer functional ROP units than the specifications indicate. Days later, a Reddit user reported the same problem on his GeForce RTX 5080 graphics card. In that case the chip should integrate 112 ROPs, but that particular card incorporates only 104, which has a downward impact on its performance.

"Affected consumers can contact the graphics card manufacturer to obtain a new one. This production anomaly has already been corrected"

Unfortunately, it is not possible to solve this problem with a BIOS or driver update, because it is a hardware defect. Nvidia has confirmed to Tom's Hardware that the problem does indeed exist and that it is a manufacturing defect affecting a small number of GeForce RTX 50 family chips. According to the company led by Jensen Huang, the number of affected graphics cards is somewhat less than 0.5%.
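The RTX 5080 case above gives us concrete numbers to check: losing 8 of 112 ROPs is roughly a 7% cut in that hardware block, while the average performance impact Nvidia quotes in its statement is only 4%, a reminder that games are rarely bottlenecked by ROPs alone. The arithmetic, restating the article's figures:

```python
# Quick check of the figures quoted in the article (RTX 5080 case).
SPEC_ROPS = 112      # ROPs the chip should integrate
ACTUAL_ROPS = 104    # ROPs reported by the affected card

missing = SPEC_ROPS - ACTUAL_ROPS
hw_cut = missing / SPEC_ROPS
print(f"Missing ROPs: {missing} ({hw_cut:.1%} of the total)")  # 8 (7.1%)

# Nvidia's statement quotes a ~4% average performance impact:
# noticeably less than the raw 7.1% reduction in ROP count.
```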
"We have identified a rare issue that affects less than 0.5% of GeForce RTX 5090, 5090D and 5070 Ti cards. They have fewer ROPs than specified (…) The average impact on graphics performance is 4%, and it does not affect artificial intelligence (AI) and compute workloads. Affected consumers can contact the graphics card manufacturer to obtain a new one. This production anomaly has already been corrected," explains Nvidia's official statement.

The sensible thing in these circumstances is for every player who has bought one of the potentially affected GeForce RTX 50 cards to check whether their GPU has the correct number of ROP units using TechPowerUp GPU-Z or a similar diagnostic application. If their GPU is one of the defective ones, they have the right to have the graphics card replaced at the store where they bought it. An important note: some Founders Edition cards sold directly by Nvidia are also affected.

Image | Nvidia

More information | Tom's Hardware

In Xataka | Nvidia "has broken the deck": it is leading the creation of a memory standard for PCs with AI
