AI is running out of power on this planet. So Nvidia is betting on servers in space
The energy appetite of data centers is nothing new. Elon Musk predicts a shortage of transformers within two years. Sam Altman believes we will need an energy revolution, such as nuclear fusion, to keep pace. The planet was not prepared for so much energy demand. And that is why Nvidia is funding a possible solution: deploying the servers outside of Earth.

It's not science fiction. It is the business model of several startups that propose building the next hyperscale data centers in Earth orbit and even on the Moon. The idea, which until recently sounded far-fetched, is gaining traction driven mainly by two factors: the insatiable demand of AI and the low-cost launches that Starship promises.

One of the companies leading this push is Starcloud, backed by the NVIDIA Inception program. And it is so serious about it that it plans to launch its first satellite, Starcloud-1, in November. On board it will carry the first data-center GPU ever launched into space: an NVIDIA H100.

The hard part will come later. Starcloud-1 is a test unit the size of a small refrigerator, but the company's goal is to build a monster: a five-gigawatt orbital data center. Including the solar panels and the enormous radiator, it would measure four kilometers across. Its purpose is the training of large AI models in orbit.

Why in space? As detailed in an extensive white paper, future models like GPT-6 or Llama 5 could require multi-gigawatt clusters, something "simply impossible with the current energy infrastructure" on Earth. In space, there is no such limitation. What's more, according to Starcloud's calculations, server energy costs are 10 times lower in space than on Earth.

The value proposition of space data centers rests precisely on two pillars that are a problem on Earth: energy and cooling.

Solar energy 24/7. On Earth, solar energy is intermittent. It depends on the day/night cycle, the weather and the atmosphere, which attenuates the radiation. In space, things change.
By placing the data centers in a sun-synchronous "dawn-dusk" orbit, the satellites follow the line that divides day and night on Earth. With the panels illuminated by the Sun almost continuously, the system reaches a capacity factor of more than 95%. "Almost unlimited, low-cost renewable energy," in Starcloud's words.

And the cooling? How would they dissipate all that heat? Data centers on Earth consume millions of liters of fresh water to cool down. There is no water in space, but there is something much better: an infinite heatsink at -270°C.

The plan is not to ventilate the servers. The heat generated by the GPUs (such as the H100) will be managed within sealed modules using liquid cooling (direct-to-chip or immersion), like high-performance systems on Earth. The difference is that the hot liquid does not go to an evaporation tower, but is pumped to gigantic radiator panels. These panels simply radiate the waste heat into the vacuum of space as infrared radiation. The Starcloud white paper details the calculation using the Stefan-Boltzmann law, estimating that a radiator at 20°C can dissipate more than 630 watts per square meter. Without using a single drop of water.

Not everything that glitters in space is gold. The pillar that supports this entire concept is the availability of high-capacity reusable rockets, such as SpaceX's Starship. Starcloud's calculations assume a long-term cost of $30 per kilogram put into orbit. But Starship is not ready, and it is still far from achieving full and rapid reusability. If that cost does not materialize, the economic viability of the whole system collapses.

The other big problem is radiation. Commercial GPUs are not designed for space. Cosmic radiation and solar flares can fry the electronics. The solution is shielding, which adds mass and therefore launch cost. Not to mention that maintenance is not possible with current technology.
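The 630 W/m² radiator figure mentioned above can be sanity-checked with the Stefan-Boltzmann law. Here is a minimal sketch in Python; the 0.75 emissivity and the two-sided radiation used to approach the white paper's number are our illustrative assumptions, not values taken from the paper:

```python
# Radiative heat rejection in vacuum via the Stefan-Boltzmann law: P/A = eps * sigma * T^4.
# Illustrative sketch only; emissivity and two-sided radiation are our assumptions.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiated_power(temp_c: float, emissivity: float = 1.0, sides: int = 1) -> float:
    """Watts radiated per square meter of panel for a panel at temp_c (Celsius)."""
    t_kelvin = temp_c + 273.15
    return sides * emissivity * SIGMA * t_kelvin ** 4

# An ideal black body at 20 C radiating from one face:
print(round(radiated_power(20.0)))  # -> 419 W/m^2

# A flat panel radiates from both faces; with emissivity 0.75 (our assumption)
# the total per square meter of panel lands near the white paper's ~630 W figure:
print(round(radiated_power(20.0, emissivity=0.75, sides=2)))  # -> 628 W/m^2
```

The takeaway: a single ideal face at 20°C radiates roughly 419 W/m², so figures above 600 W per square meter of panel come from counting both faces of the radiator, moderated by a real-world emissivity below 1.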