
The B300 GPU is Nvidia's new beast for AI. And we already know what the company is preparing for 2026 and 2027

Jensen Huang, co-founder and CEO of Nvidia, did not miss the opportunity to unveil, at GTC 2025 (GPU Technology Conference), the next GPU for artificial intelligence (AI) that his engineers have readied. The most spectacular thing this electrical engineer presented is the DGX B300 platform. This hardware is Nvidia's most powerful for generative AI, although according to the company it is also its most energy-efficient proposal.

The Blackwell Ultra GPUs in the B300 platform work alongside 2.3 TB of HBM3e memory, delivering, according to Nvidia, 72 PFLOPS in FP8-precision training and no less than 144 PFLOPS in FP4-precision inference. These figures are monstrous. In fact, the B300 platform is 11 times faster in inference and 4 times faster in training than its predecessor, the B200.
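
To put those two peak figures in context, here is a minimal sketch that uses only the numbers quoted above; it simply shows that the FP4 inference figure is exactly twice the FP8 training figure, which is consistent with peak throughput roughly doubling when the precision is halved.

```python
# Back-of-envelope check of the DGX B300 figures quoted by Nvidia.
# All values are the ones stated in the article; nothing else is assumed.

fp8_training_pflops = 72     # peak FP8 training throughput (PFLOPS)
fp4_inference_pflops = 144   # peak FP4 inference throughput (PFLOPS)

# The FP4 figure is exactly twice the FP8 figure, consistent with peak
# throughput roughly doubling each time the precision is halved.
ratio = fp4_inference_pflops / fp8_training_pflops
print(f"FP4 inference / FP8 training = {ratio:.1f}x")  # -> 2.0x
```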

This is the hardware with which Nvidia wants to maintain its leadership

If we look at the power consumption figures announced by Nvidia, the energy efficiency of the B200 and B300 platforms appears, at first glance, to be similar. The former consumes approximately 14.3 kW at peak, and the latter around 14 kW. However, there is something we should not overlook: the GPUs of both solutions are implemented on the Blackwell microarchitecture, but they are not the same. The Blackwell Ultra chips of the B300 platform are more powerful than the plain Blackwell chips of the B200 infrastructure.
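
As a rough illustration of what those consumption figures mean, this sketch computes performance per watt for the B300 platform using only the numbers quoted in the article; the equivalent calculation for the B200 would need its throughput figures, which are not given here.

```python
# Rough performance-per-watt estimate for the DGX B300 platform,
# using only the peak figures quoted in the article.

power_kw = 14.0              # approximate peak power draw (kW)
fp8_training_pflops = 72     # peak FP8 training throughput (PFLOPS)
fp4_inference_pflops = 144   # peak FP4 inference throughput (PFLOPS)

print(f"FP8 training:  {fp8_training_pflops / power_kw:.1f} PFLOPS per kW")
print(f"FP4 inference: {fp4_inference_pflops / power_kw:.1f} PFLOPS per kW")
# -> roughly 5.1 PFLOPS/kW in FP8 and 10.3 PFLOPS/kW in FP4
```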

The B300 platform integrates 50% more memory, allowing it to handle larger AI models

In addition, the B300 platform integrates 50% more memory, which in theory allows this hardware to handle larger models with more parameters. This proposal will reach the first data centers during the second half of 2025. In any case, Nvidia has not only talked about its current hardware at this edition of its AI conference; it has also previewed what its engineers are working on for 2026 and 2027. The microarchitecture that will succeed Blackwell is known as Rubin, and, as expected, it will be even more powerful than its predecessor.
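
Going back to the memory figure for a moment, here is a minimal sketch, derived only from the article's own numbers, of what "50% more memory" implies for the predecessor's total capacity:

```python
# What "50% more memory" implies, using only the article's own figures.

b300_memory_tb = 2.3                            # HBM3e on the B300 platform (TB)
implied_b200_memory_tb = b300_memory_tb / 1.5   # the B300 carries 50% more

print(f"Implied B200 platform memory: {implied_b200_memory_tb:.2f} TB")
# -> roughly 1.53 TB of HBM3e for the predecessor
```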

An interesting detail is that Rubin will be compatible with Blackwell at the infrastructure level, which will allow Nvidia customers to combine both solutions. In any case, Rubin will deliver 1.2 exaflops in FP8-precision training, compared with 0.36 exaflops for the B300 platform. It will arrive during the second half of 2026.

And during the second half of 2027, Nvidia will launch Rubin Ultra, a revision that, according to the company, will reach 5 exaflops in FP8-precision training, so its performance in this scenario will be roughly four times greater than Rubin's. One last interesting note: Rubin will use HBM4 memory, while Rubin Ultra will use HBM4e.
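
Taken together, the FP8 training figures Nvidia has shared for this roadmap can be compared with a quick sketch; it uses only the numbers quoted above (0.36, 1.2 and 5 exaflops) and simply computes the generational ratios:

```python
# Generational FP8 training throughput, using only the figures quoted above.

fp8_training_exaflops = {
    "B300 (2025)": 0.36,
    "Rubin (2026)": 1.2,
    "Rubin Ultra (2027)": 5.0,
}

baseline = fp8_training_exaflops["B300 (2025)"]
previous = None
for name, exaflops in fp8_training_exaflops.items():
    vs_b300 = exaflops / baseline
    vs_prev = exaflops / previous if previous else 1.0
    print(f"{name}: {exaflops:.2f} exaflops "
          f"({vs_prev:.1f}x vs previous, {vs_b300:.1f}x vs B300)")
    previous = exaflops
# -> Rubin is ~3.3x the B300 platform; Rubin Ultra is ~4.2x Rubin (~14x the B300)
```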

Image | Nvidia

More information | Nvidia

In Xataka | AI is already our best ally to solve the mathematical problems that seem impossible
