Nvidia is preparing to open the door to silicon photonics. Just a few hours ago, ‘Hot Chips’, the specialized conference on semiconductor engineering and high-performance computing, kicked off in Palo Alto, California (USA). And the company led by Jensen Huang has not missed the opportunity to announce that in 2026 its latest-generation artificial intelligence (AI) platforms will use photonic interconnects to reach higher transfer speeds between GPU clusters.
Most integrated circuit designers and manufacturers are working on the development of silicon photonics. Douglas Yu, the TSMC executive responsible for systems integration, explained in September 2023 just how disruptive this technology could be: “If we manage to implement a good silicon photonics integration system we will trigger a new paradigm. We will probably be standing at the beginning of a new era.”
Nvidia has just shown its hand
Before moving forward, it is worth pinning down with some precision what we are talking about. Silicon photonics is a discipline that, in the field that concerns us, seeks to develop the technology of this chemical element to optimize the conversion of electrical signals into light pulses. The most obvious field of application for this innovation is the implementation of high-performance links that, on paper, can be used both to handle communication between several chips and to optimize the transfer of information between several machines.
The advanced packaging technologies used by the main semiconductor manufacturers, such as TSMC, Intel or Samsung, stand to benefit greatly from a very high-performance chip-to-chip communication mechanism. So do the big data centers that need to connect a large number of machines. However, there is one particular discipline with an overwhelming future projection that would be especially well served by the advantages silicon photonics proposes: AI.
CPO technology reduces energy consumption to just 9 watts per port
This is precisely Nvidia’s bet. In AI clusters, thousands of GPUs must work in unison, so it is essential to connect them with high-performance links. This challenge can be addressed with traditional copper cables or with optical modules, but both solutions introduce major inefficiencies into the infrastructure. The most problematic are energy loss and bottlenecks. Data transfer can consume up to 30 watts per port, which increases the energy dissipated as heat and raises the probability of failure.
In addition, latency limits cluster scalability as the number of GPUs in data centers grows. To solve these inefficiencies, Nvidia will integrate the optical components required by photonic interconnects into the same package as the switching chip. This technology is known as CPO (Co-Packaged Optics), and it reduces energy consumption to just 9 watts per port. It also minimizes signal loss and improves data integrity. It looks really good.
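To put those two figures in perspective, here is a minimal back-of-the-envelope sketch in Python using only the numbers cited above (up to 30 watts per port for traditional optical modules versus 9 watts per port with CPO). The port count is a hypothetical value chosen purely for illustration, not a figure from Nvidia.

```python
# Illustrative estimate of interconnect power using the per-port figures
# cited in the article. The port count below is an assumption, not a
# number published by Nvidia.

PLUGGABLE_W_PER_PORT = 30.0  # upper-bound figure cited for traditional optical modules
CPO_W_PER_PORT = 9.0         # figure cited for co-packaged optics


def interconnect_power_kw(num_ports: int, watts_per_port: float) -> float:
    """Total interconnect power in kilowatts for a given number of switch ports."""
    return num_ports * watts_per_port / 1000.0


if __name__ == "__main__":
    num_ports = 100_000  # hypothetical port count for a large AI cluster
    pluggable_kw = interconnect_power_kw(num_ports, PLUGGABLE_W_PER_PORT)
    cpo_kw = interconnect_power_kw(num_ports, CPO_W_PER_PORT)
    print(f"Traditional optical modules: {pluggable_kw:,.0f} kW")
    print(f"Co-packaged optics:          {cpo_kw:,.0f} kW")
    print(f"Savings:                     {pluggable_kw - cpo_kw:,.0f} kW "
          f"({1 - cpo_kw / pluggable_kw:.0%})")
```

Under these assumptions the per-port difference compounds into megawatt-scale savings at cluster size, which is why the energy argument carries so much weight in data center design.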
Nvidia has confirmed that it will integrate CPO technology into its Quantum-X InfiniBand and Spectrum-X Ethernet interconnect platforms during 2026. However, there is something important worth not overlooking: CPO will not be an extra. When it arrives, it will be established as a structural requirement of the next generation of AI data centers, in a clear attempt to increase the competitiveness of Nvidia’s hardware platforms.
Image | Nvidia
More information | Tom’s Hardware
