Something is going wrong with AI: to power data centers, the US is turning to energy solutions it thought were buried

The race to develop and operate ever more powerful artificial intelligence models comes at a cost that is rarely at the center of the technology narrative. It is not in the chips or the software, but in the huge amount of electricity needed to keep data centers running around the clock. In the United States, this pressure is already translating into concrete decisions: polluting power plants that had been retired, or were about to be, are being restarted to cover growing peaks and tensions on the grid. The paradox is evident: the most ambitious advance in the technology sector depends, for now, on energy solutions from another era.

The problem is not so much an absolute shortage of electricity as a timing mismatch. Demand from AI-linked data centers is growing much faster than new generation, especially renewable generation, can be brought online. Building large energy infrastructure takes years, while these complexes can be built in far shorter time frames. Faced with this temporary crunch, grid operators and utilities are turning to what already exists and can be activated immediately, even if it is more polluting.

PJM in context. The clash between electricity demand and supply is clearest in the PJM region, the largest electricity market in the United States, which covers 13 states and concentrates a very significant share of the country's data centers. It can be understood as a large regional electricity exchange that coordinates generation, prices and grid stability in real time. There, the growth of AI-linked data centers is testing a system designed for a very different consumption pattern, making PJM the first thermometer of a problem that is beginning to appear elsewhere.

What is a peaker plant. So-called peaker plants are facilities designed to come online only during short periods of peak demand, such as heat waves or winter cold snaps, when the system needs immediate reinforcement. They are not designed to operate continuously, but to react quickly. According to a report by the US Government Accountability Office, these facilities generate just 3% of the country's electricity but account for nearly 19% of installed capacity, a reserve that is now being called on far more often than expected.

[Image: south view of the Fisk plant in Chicago]

The case of the Fisk plant, in the working-class neighborhood of Pilsen in Chicago, illustrates how this shift plays out on the ground. It is an oil-fueled facility, built decades ago and scheduled to retire next year, that had been relegated to an almost token role. The arrival of new electrical demand associated with data centers changed that equation. Matt Pistner, senior vice president of generation at NRG Energy, told Reuters that the company saw an economic case for keeping the units and therefore withdrew the closure notice, a decision that returns activity to a site many residents believed was permanently winding down.

When the price rules. The shift is not explained by technical needs alone, but also by very clear market signals. In PJM, the prices paid to generators to guarantee supply at moments of maximum demand soared this summer, rising more than 800% on the previous year.
An analysis by that same agency shows that about 60% of oil, gas and coal plants scheduled for retirement in the region postponed or canceled those plans this year, and most of them were peaker units, precisely the ones that best fit this new scenario of relative scarcity. The bill for this energy shift is paid above all at the local level. Peaker plants tend to be older facilities, with shorter stacks and fewer pollution controls than other plants, which increases the impact on their immediate surroundings when they run more often.

Coal is also postponed. The phenomenon is not limited to oil- or gas-fired peakers. On a national scale, several utilities have begun to delay the closure of coal plants that were part of their climate commitments. A DeSmog analysis identified at least 15 retirements postponed since January 2025 alone, facilities that together account for about 1.5% of US energy emissions. Dominion Energy offers a clear example: in 2020 it pledged to generate all of its electricity from renewables by 2045, but after projecting that data center demand in Virginia will quadruple by 2038, it is now taking a step back.

Images | Xataka with Gemini 3 Pro | Theodore Kloba
In Xataka | A former NASA engineer is clear: data centers in space are a horrible idea

A former NASA engineer is clear: data centers in space are a horrible idea

Artificial intelligence has turned energy into the new technological bottleneck. And faced with that limit, some of the largest companies in the world have begun to look up. To give some examples: Jeff Bezos has spoken of "giant AI clusters orbiting the planet" within a decade or two. Google has experimented with running artificial intelligence calculations on solar-powered satellites. Nvidia backs startups that want to launch GPUs into space. Even OpenAI has explored buying a rocket company to secure its own path off Earth. The promise is seductive: solar-powered data centers running around the clock, without power grids or cooling towers. The problem is that, when you move from the story to the physics, the engineering and the numbers, the idea begins to break down.

Data centers in space. One question hangs over all of this: why do technology companies want to send data centers to space? The motivation, at first glance, is clear. According to data from the International Energy Agency, data center electricity consumption could double by 2030, driven by the explosion of generative AI. Training and running models like ChatGPT, Gemini or Claude requires massive amounts of electricity and huge volumes of water for cooling. In many places, these projects are already running into local opposition or the physical limits of the grid. In this context, space appears as a tempting solution. In certain orbits, solar panels can receive almost constant light, without clouds or night cycles. Besides, as Bezos and other advocates explain, the vacuum of space seems to offer an ideal environment for dissipating heat without resorting to cooling towers or millions of liters of fresh water. By this argument, space data centers would be more efficient, more sustainable and, over time, even cheaper than terrestrial ones. For some executives, it would not be an eccentricity but the "natural evolution" of an infrastructure that already began with communications satellites.

When engineers raise their hands. Against the enthusiasm of corporate statements, several space engineering experts have been much more blunt. In one of the most cited texts on the subject, a former NASA engineer with a PhD in space electronics and direct experience with AI infrastructure at Google sums up his position bluntly: "This is a terrible idea and it doesn't make any sense." His criticism is not ideological, but technical. And it starts with the first great myth: the supposed abundance of energy in space.

Solar energy is not magic. The largest solar array ever deployed off Earth belongs to the International Space Station. According to NASA data, its panels cover about 2,500 square meters and, under ideal conditions, generate between 84 and 120 kilowatts, part of which goes to charging batteries for periods in shadow. To put that in context, a single modern AI GPU draws on the order of 700 watts, and in practice around 1 kilowatt once losses and auxiliary systems are taken into account. With those figures, an infrastructure the size of the ISS could barely power a few hundred GPUs. As this engineer explains, a modern data center can house tens or hundreds of thousands of GPUs. Matching that capability would require launching hundreds of structures with the size, and complexity, of the International Space Station. And even then, each would be equivalent to just a few racks of terrestrial servers.
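To make the arithmetic above concrete, here is a minimal back-of-the-envelope sketch. The inputs are the round numbers quoted in the text (roughly 100 kW for an ISS-sized array, about 1 kW per GPU); the 100,000-GPU cluster is a hypothetical figure for comparison, not a specific facility.

```python
# Rough sanity check of the "solar power in orbit" argument above.
# All inputs are illustrative assumptions taken from the figures quoted in the text.

iss_panel_area_m2 = 2_500   # ISS solar array area cited above
iss_power_kw = 100          # mid-range of the 84-120 kW figure
gpu_power_kw = 1.0          # ~1 kW per AI GPU once losses and auxiliaries are included

gpus_per_iss_equivalent = iss_power_kw / gpu_power_kw
print(f"GPUs powered by one ISS-sized array: ~{gpus_per_iss_equivalent:.0f}")

# A hypothetical 100,000-GPU terrestrial cluster, for comparison.
target_gpus = 100_000
structures_needed = target_gpus / gpus_per_iss_equivalent
print(f"ISS-sized structures needed for {target_gpus:,} GPUs: ~{structures_needed:.0f}")
```

Under these assumptions the answer lands at roughly one hundred GPUs per ISS-sized array, which is why the engineer talks about hundreds of such structures to match a single terrestrial cluster.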
Furthermore, the nuclear alternative does not solve the problem either: the nuclear generators used in space, RTGs, produce between 50 and 150 watts. In other words, not even enough to power a single GPU.

Space is not a refrigerator. The second big argument against orbital data centers is cooling. It is often repeated that space is cold and that this would make it easier to dissipate the servers' heat. According to engineers, this is one of the most misleading ideas in the entire debate. On Earth, cooling relies on convection: air or water carries the heat away. In the vacuum of space there is no convection. All heat must be removed by radiation, a far less efficient process that requires enormous surfaces. NASA itself offers a telling example: the active thermal control system of the International Space Station. It is an extremely complex network of ammonia loops, pumps, heat exchangers and giant radiators, and even so its dissipation capacity is on the order of tens of kilowatts. According to the calculations of the aforementioned engineer, shedding the heat generated by high-performance GPUs in space would require radiators even larger than the solar panels that power them. The result would be a colossal satellite, larger and more complex than the ISS, to do a job that is solved far more simply on Earth.

And there is a third factor: radiation. In orbit, electronics are exposed to charged particles that can cause bit errors, unexpected reboots or permanent damage to chips. Although some tests, such as those Google ran on its TPUs, show that certain components can withstand high doses, the failures do not disappear, they only multiply. Shielding reduces the risk but adds mass, and every extra kilo raises the cost of the launch. On top of that, AI hardware has a very short useful life, becoming obsolete within a few years. On Earth it is replaced; in space it is not. As critics point out, an orbital data center would have to operate for many years to amortize its cost, but it would do so with hardware that falls behind much sooner.

So why do they keep insisting? The answer seems to lie less in current engineering and more in long-term strategy. All of these projects depend on launch costs falling drastically. Some estimates point to thresholds of about $200 per kilogram for space data centers to compete economically with terrestrial ones. That scenario relies on fully reusable rockets like Starship, which have not yet demonstrated that capability at operational scale. Meanwhile, terrestrial renewables keep getting cheaper and storage systems improve year after year. Furthermore, the space narrative fulfills another function, because it positions … Read more
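To put the thermal argument above into numbers, here is a rough radiative-cooling estimate using the Stefan-Boltzmann law. The radiator temperature, emissivity, environmental heat load and panel output density are all illustrative assumptions, not values from any published design.

```python
# Back-of-the-envelope radiator sizing for the cooling argument above.
# Every number here is an assumption chosen for illustration, not a design value.

sigma = 5.67e-8          # Stefan-Boltzmann constant, W / (m^2 K^4)
emissivity = 0.9         # typical high-emissivity radiator coating
t_radiator_k = 300.0     # ~27 C coolant loop, on the cool side for GPU waste heat
env_load_w_m2 = 150.0    # assumed absorbed sunlight / Earth IR per m^2 of radiator

# Net heat rejected per square metre of single-sided radiator.
net_w_m2 = emissivity * sigma * t_radiator_k**4 - env_load_w_m2

it_load_mw = 1.0         # one megawatt of GPU waste heat, i.e. ~1,000 GPUs at 1 kW
radiator_area_m2 = it_load_mw * 1e6 / net_w_m2
print(f"Radiator area per MW of waste heat: ~{radiator_area_m2:,.0f} m^2")

# Compare with the solar array needed to *supply* that megawatt,
# assuming ~300 W of electrical output per m^2 of modern panel in orbit.
panel_area_m2 = it_load_mw * 1e6 / 300.0
print(f"Solar array area per MW supplied:   ~{panel_area_m2:,.0f} m^2")
```

With these assumptions the radiator comes out comparable to, or larger than, the solar array for the same megawatt, which is the direction of the claim made above; the exact ratio swings a lot with radiator temperature and panel efficiency.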

AI data centers are sending your electricity bill skyrocketing

Data centers consume a lot of electricity, which is why proposals as wild as taking them to space or submerging them in the sea have emerged to cut their consumption. Technology companies face a shortage of electrical energy, but the real problem is something else: data centers are pushing up the electricity bill for all citizens. Now three US senators want to investigate it thoroughly.

A political question. The New York Times reports that three Democratic senators have announced an investigation into big technology companies over their role in rising electricity bills. The senators have sent letters to Microsoft, Google, Meta, Amazon, CoreWeave and other companies asking them to detail exactly what their data centers consume. Bill increases have become a political issue and played an important role in elections in several states. In Virginia, which concentrates the largest number of data centers in the world, Governor Abigail Spanberger's campaign included proposals to require data centers to "pay their fair share."

The problem. For the past 20 years, the US electricity system had been stuck with flat demand or very modest increases. Data centers have grown very abruptly: in 2023 they consumed 4% of all electricity in the United States, and that share is estimated to rise to 12% by 2028. This abrupt increase in demand has forced electricity companies to modernize the grid. Technology companies assume part of the cost, but not all of it, and the way to recover that investment is through the bills of all grid users.

The discount trick. Tech companies such as Amazon insist that their data centers are not raising bills and that they assume all the costs, helping to improve the grid for everyone. What they do not say is that they benefit from enormous discounts, like the electricity-rate discount Amazon itself requested from regulators in Ohio in 2024, where it is building a data center. The problem is that the agreement is opaque and we do not know how large the discount is, but it is estimated that it could amount to $135 million per year over a period of 10 years.

Who really pays? In many cases, technology companies pay for the infrastructure needed to expand the grid, but what about these discounts? According to a paper published by the Harvard Electricity Law Initiative, which reviewed more than 50 regulatory cases, it is very common for electricity companies to offer subsidies to attract technology companies and to compensate for those discounts by passing them on to all grid subscribers, which ends up raising the bill.

Unaffordable increases. According to the United States Energy Information Administration, electricity prices in September were up 7% on the same period of the previous year. Things change in the towns near the data centers, where increases have reached 267%, unaffordable figures for many citizens.

Proposals. Some states are already legislating to prevent grid customers from ending up footing the bill for data centers. That is the case of Michigan, which has adopted special rules for data centers. Companies must sign a contract of at least 15 years, face fines if they cancel early, and pay for at least 80% of the contracted power even if they do not use it. In addition, they must pay all the costs of the lines and services built to serve them.
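As a rough illustration of the 80% minimum-take rule described above, here is a simplified sketch. The contracted capacity, utilization and price are hypothetical numbers, and the rule is modeled as a simple floor on billed energy, which is a simplification of how a real tariff would apply it.

```python
# Illustrative only: what a Michigan-style take-or-pay floor means in practice.
# The load and price figures below are hypothetical, not actual Michigan numbers.

contracted_mw = 300          # hypothetical data-center interconnection
utilisation = 0.55           # suppose the campus ramps up slowly and uses 55%
price_usd_per_mwh = 70       # assumed all-in supply rate
hours_per_year = 8_760
minimum_take = 0.80          # rule described above: pay for >= 80% of contracted power

billed_fraction = max(utilisation, minimum_take)
annual_bill = contracted_mw * billed_fraction * hours_per_year * price_usd_per_mwh
used_only_bill = contracted_mw * utilisation * hours_per_year * price_usd_per_mwh

print(f"Bill under the 80% floor:      ${annual_bill:,.0f} per year")
print(f"Bill for energy actually used: ${used_only_bill:,.0f} per year")
```

The point of the rule is visible in the gap between the two figures: the shortfall is paid by the data center, not spread across other grid customers.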
However, these proposals could run into trouble because of the executive order Trump signed that prevents states from enacting laws that could slow the advance of AI, all in the name of winning the battle against China.

Image | Google
In Xataka | The United States may win the AI race, but its problem is different: China is winning all the others

Google is serious about putting data centers in space. Elon Musk and Jeff Bezos are rubbing their hands

While some municipalities are still debating whether to let big technology companies install data centers within their borders, Google wants to go a step further: taking data centers to space.

Google. The company revealed its intentions a few weeks ago, and its Suncatcher project aims to put two prototype satellites in orbit before 2027. Curiously, Elon Musk and Jeff Bezos are more than delighted with their rival's idea.

Suncatcher project. Pushing the capabilities of artificial intelligence requires training it, and that calls for huge data centers with spectacular computing power. The problem is that the energy needs of these facilities are astronomical: they have become resource sinks, leading energy companies to set aside their renewable plans and even prompting the opening of "private" nuclear power plants. Suncatcher could not have a more fitting name. In space, without the atmosphere in the way, solar panels capture the light spectrum differently, enough to feed those seemingly insatiable data centers, and what Google proposes is to build constellations of dozens or hundreds of satellites orbiting in formation at about 650 kilometers of altitude. Each would be equipped with Trillium TPUs (processors designed specifically for AI calculations) and connected to the others via optical laser links.

Pichai brings it up everywhere. Although 2027 is the key date, Google is clearly keen to air its plans, because they are at once a display of technological muscle, an invitation for interested parties to invest in the process, and a way to keep inflating everything around AI. And the person pushing this message hardest is the company's own CEO, Sundar Pichai. Since we learned of Google's plans, Pichai has brought the topic up in every interview he has given. He says nothing new beyond the hope of having TPUs in space in 2027 and the ambition that, within a decade, extraterrestrial data centers will be the norm.

Musk and Bezos: competitors, but allies. And if Google has an interest in selling its narrative, so do two of its most direct rivals: Elon Musk and Jeff Bezos. Both Musk, with several of his companies, and Bezos, with Amazon Web Services, are in the race for data centers and artificial intelligence. They own some of the largest on the planet, but they also have something the rest of the competition does not: the ability to launch things into space. Musk with SpaceX and Bezos with Blue Origin have the tools to put satellites into orbit, charging for every kilo they launch. And that is the point: the more credible it seems that the future of computing lies in low Earth orbit, the more economic and political sense those launch businesses make.

SpaceX and Blue Origin. Both are Google's competition, but also Google's route to achieving its objective. And, ultimately, we keep seeing rival companies renting services from one another.

Data center fever in space. The truth is that, at first, building these extraterrestrial data centers sounds like a crazy plan, but from a purely pragmatic point of view (setting aside the logistics and the money that both development and each launch will cost), it is a plan that makes sense. In space, a panel can produce up to eight times more energy than on the Earth's surface, as well as generating electricity continuously because it does not depend on day/night cycles. That would eliminate the need for huge batteries, but also for complex water-based cooling systems.
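Where does that "up to eight times" figure come from? A rough annual-yield comparison per square meter of panel gives the flavor; the capacity factors and efficiency below are illustrative assumptions, not Google's figures.

```python
# Rough comparison of annual solar yield per m^2 of panel, orbit vs ground.
# Capacity factors and efficiency are illustrative assumptions.

solar_constant_w_m2 = 1361     # irradiance above the atmosphere
ground_peak_w_m2 = 1000        # standard test-condition irradiance at the surface
panel_efficiency = 0.22        # same panel assumed in both locations

orbit_capacity_factor = 0.99   # sun-synchronous dawn-dusk orbit: near-constant sun
ground_capacity_factor = 0.17  # fixed-tilt panel in a reasonably sunny location

orbit_kwh = solar_constant_w_m2 * panel_efficiency * orbit_capacity_factor * 8760 / 1000
ground_kwh = ground_peak_w_m2 * panel_efficiency * ground_capacity_factor * 8760 / 1000

print(f"Orbit:  ~{orbit_kwh:.0f} kWh per m^2 per year")
print(f"Ground: ~{ground_kwh:.0f} kWh per m^2 per year")
print(f"Ratio:  ~{orbit_kwh / ground_kwh:.1f}x")
```

Under these assumptions the ratio lands just under eight, consistent with the figure quoted in the article; sunnier sites or tracking panels on the ground would narrow the gap.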
And, as we said, Google is not alone in this. There is currently a fever for space data centers, with big technology companies in the spotlight.

Considerable challenges. That said, Google itself admits this strategy will not be easy to pull off. First, the costs. The company estimates that launch prices could fall from several thousand dollars per kilo to just $200/kg by the mid-2030s if the industry consolidates. In that case, it notes, the price of launching and operating a space data center could be comparable to the energy costs of an equivalent terrestrial one. Another difficulty will be keeping the satellites in tight formation: they would have to stay within 100-200 meters of each other for the optical links to be viable. And most important of all: the radiation tolerance of the TPUs. Google has been experimenting with this for years, but it still has to test the effects of radiation on sensitive components such as the HBM memory. Astronomers, no doubt, will be delighted with this strategy, just as they are with Starlink.

Image | ESA
In Xataka | We are launching more things into space than ever before. And the next problem is already on the table: how to pollute less

We have been talking about data centers in space in theory for months. One company already has a plan to put one up in 2027

The Californian startup Aetherflux has announced that it will launch its first data center satellite in the first quarter of 2027. It is the initial node of a constellation the company has named "Galactic Brain," designed to offer in-orbit computing capacity powered by continuous solar energy.

The underlying promise. Aetherflux presents itself as an alternative to the years of construction that terrestrial data centers require. According to Baiju Bhatt, founder of the company and co-founder of the financial firm Robinhood, "the race toward artificial general intelligence is fundamentally a race for computing power and, by extension, energy." The company is committed to placing sunlight next to silicon and bypassing the electrical grid entirely.

How the project works. The Galactic Brain satellites will operate in low Earth orbit, taking advantage of solar radiation 24 hours a day, something impossible on the ground. Advanced thermal systems would remove the limitations faced by terrestrial data centers, which need large amounts of water and electricity for cooling. The constellation also fits within Aetherflux's original plans: transmitting energy from space to Earth using infrared lasers.

The competition is already underway. Aetherflux is not alone in this bet. In November, Google presented its Suncatcher project, a plan to launch AI chips into space on solar-powered satellites. Jeff Bezos has also expressed his optimism about large data centers operating in space within the next decade or two, a goal Blue Origin has been working on for more than a year. SpaceX is likewise working on using Starlink satellites for AI computing loads; Musk himself has written on X that scaling the Starlink V3 satellites will be enough.

The real obstacles. Although launch costs have come down considerably, they remain prohibitive. According to recent estimates, launching a kilogram with SpaceX's Falcon Heavy costs around $1,400. Google calculates that if these costs drop to about $200 per kilogram by 2030, as projected, the expense of establishing and operating space data centers would be comparable to that of terrestrial facilities. In addition, the chips will have to withstand more intense radiation and avoid collisions in an increasingly congested orbit.

The urgency. Big tech is colliding with physical limits on Earth. Since 2023, dozens of data center projects have been blocked or delayed in the United States due to local opposition over electricity consumption, water use and associated pollution. According to the consulting firm CBRE, limitations in electricity generation have become the main inhibitor of data center growth around the world.

The Aetherflux calendar. The company, founded in 2024 and with $60 million raised in financing, first plans to demonstrate the feasibility of transmitting space energy with a satellite it will launch in 2026. If all goes according to plan, the first Galactic Brain node will arrive in 2027. The company expects to launch about 30 satellites at a time on a SpaceX Falcon 9 or equivalent, although if Starship becomes an option, it could orbit more than 100 data center satellites in a single launch.

The long-term strategy. Aetherflux has not revealed pricing yet, but it promises multi-gigabit bandwidth with near-constant uptime. Its approach is to continually launch new hardware and quickly integrate the latest architectures. Older systems would run lower-priority tasks until the life of the high-end GPUs is exhausted, which under heavy utilization and radiation might not be more than a few years.
Cover image | İsmail Enes Ayhan and NASA
In Xataka | OpenAI launches GPT-5.2 weeks after GPT-5.1: a maneuver that aims to gain ground on Google's Gemini 3

In Finland they already know how to deal with excess heat from data centers: convert it into district heating

Helsinki has found an unexpected ally for decarbonizing its heating in the midst of the rise of artificial intelligence: waste heat from data centers. The same heat that servers give off when processing millions of queries, training AI models or moving Internet traffic is no longer wasted. In the Finnish capital, this thermal flow, which is growing at the same rate as the digital world, is starting to become heating for tens of thousands of homes.

A digital sector that is now heating cities. For years, data centers were known for one uncomfortable characteristic: they generated a lot of heat and needed huge cooling systems to get rid of it. Now that residual heat is being channeled into Helsinki's heating network, thanks to agreements signed with operators such as Equinix, Telia and Elisa. Data Center Dynamics notes that the city's energy company has been testing this model for more than a decade, with the first pilots dating back to 2010, but the scale is now completely different: the city's thermal demand is enormous and the volume of heat generated by the digital economy keeps growing. The results are already visible: a single data center can heat up to 20,000 homes, according to official figures from Helen. The Telia plant, for example, already recovers up to 90% of the heat generated by its servers, enough to heat 14,000 apartments, and within a few years it could double that figure to 28,000.

A change in how heat is produced. Recovering digital heat is more than a technological curiosity; it represents a change in how district heating is conceived. In the words of the Finnish company, "the electricity consumed by data centers always ends up being converted into heat." The difference is that now that heat is no longer vented outdoors: it is reused.

The engineering behind urban heat. Finland can turn digital heat into district heating because it has an especially advanced district heating system: a network of pipes that distributes hot water to homes, schools and public buildings. The process works as follows. A data center generates heat: the servers run 24/7 and are cooled continuously. That heat, instead of being dissipated outdoors, is captured. It is then recovered and transferred; data centers can install their own recovery systems or use those offered by the energy company. The heat is sent to an "energy platform," where heat pumps raise it to useful temperatures. The temperature is then adjusted to the 85-90 ºC needed for the water to circulate through the urban network; this is where high-temperature heat pumps come into play, some of which, like Patola's, work even with outside air at -20 ºC. Finally, the heat is injected into the grid and distributed across the city to warm thousands of buildings.

Closing the energy circle. To understand why Finland leads this model, we must look at an essential piece of technology: heat pumps. Not only domestic ones, but also large industrial units capable of raising waste heat to temperatures useful for an urban network. Europe, and especially the Nordic countries, has become a world leader in this technology. Finland has 524 heat pumps per 1,000 homes, a figure second only to Norway, and its cities have been electrifying heating for decades. This combination of a cold climate, a tradition of district heating, a heat pump industry and the need to decarbonize quickly turns Finland into an urban-scale energy laboratory.
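As a rough illustration of the scale described above, here is a minimal estimate of how many apartments one facility's waste heat could supply. The IT load, heat pump performance and per-apartment demand are assumptions chosen for illustration; Helen and Telia do not publish these exact parameters here.

```python
# Rough estimate of homes heated by a data center's waste heat,
# in the spirit of the Helsinki figures above. All inputs are assumptions.

it_load_mw = 20.0               # hypothetical average IT load of the facility
recovery_fraction = 0.90        # share of server heat captured (the Telia figure)
heat_pump_cop = 3.0             # assumed heating COP when lifting heat to ~85-90 C
apartment_heat_mwh_year = 10.0  # assumed annual heat demand of a Helsinki apartment

hours = 8_760
captured_mwh = it_load_mw * recovery_fraction * hours
# A heat pump delivers the captured heat plus its own compressor work:
delivered_mwh = captured_mwh * heat_pump_cop / (heat_pump_cop - 1)

homes = delivered_mwh / apartment_heat_mwh_year
print(f"Heat delivered to the network: ~{delivered_mwh:,.0f} MWh per year")
print(f"Apartments heated:             ~{homes:,.0f}")
```

With these assumptions the estimate lands in the low tens of thousands of apartments, the same order as the 14,000 to 28,000 range quoted for the Telia plant.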
A model with limits. Although the system works, it is not a panacea. As Middle Parenthesis points out, not all data centers sit close to areas with thermal demand, not all generate enough heat to justify the investment, heat recovery improves efficiency but does not reduce data centers' electricity consumption, and in hot climates or sprawling cities the model is much harder to replicate. Still, the trend is clear. With the expansion of AI and the growth of the cloud, the amount of heat available will only increase. The Nordic countries (Sweden, Norway, Denmark) already take advantage of it, and large operators such as Microsoft and Google are exploring similar systems across Europe.

From silicon to the stove. The Finnish model shows that even at the heart of digital infrastructure, in the data centers that power our online lives, there can lie a useful, concrete source of energy for everyday life. The heat produced by our searches, our videos or our conversations with AI can be transformed, with the right infrastructure, into heating for a home in Helsinki. In a world desperately seeking clean heat, Finland has already found a tangible, scalable and surprisingly logical answer: turning the thermal problem of the digital age into a solution for the Nordic climate. A quiet reminder that, sometimes, the energy transition advances with a simpler approach: using the heat the servers are already producing, tirelessly.

Image | Freepik
In Xataka | Someone cut five undersea cables in the Baltic. Finland already points to a ship from the "shadow fleet" as responsible

We already have the world's first fast neutron nuclear reactor earmarked for AI data centers. We are going to use it to power them

The growth of artificial intelligence is driving global electricity demand to historic levels. The expansion of data centers, advancing electrification and the industrial rebound are straining aging grids that are already saturated in multiple countries. In this scenario, the digital sector, a huge consumer of electricity for AI development, faces a paradox: it needs much more energy, but it must get it without increasing its emissions. And that is where a proposal emerges that until recently would have sounded like science fiction: data centers powered by a compact fast neutron nuclear reactor.

The Stellaria-Equinix deal no one saw coming. The French startup Stellaria, born out of the French Alternative Energies and Atomic Energy Commission (CEA) and Schneider Electric, has announced a pre-purchase agreement with Equinix, one of the largest global data center operators. According to the press release, the agreement secures Equinix the first 500 MW of capacity of the Stellarium, the molten salt, fast neutron reactor the company plans to deploy starting in 2035. This reservation is part of Equinix's push to diversify toward "alternative energies" for AI-ready data centers.

Autonomy, zero carbon and waste management. That is the short summary of the first breed-and-burn reactor intended to supply data centers. As Stellaria explains, it offers: completely carbon-free, dispatchable energy, enough to make a data center autonomous; an underground design with no exclusion zone, thanks to operation at atmospheric pressure and its liquid core; ultra-fast response to load variations, essential for generative AI; virtually endless regeneration of fuel, part of which can come from existing nuclear plant waste; and multi-fuel capability, from uranium-235 and 238 to plutonium-239, MOX, minor actinides and thorium. For Equinix, this means solving one of its great challenges: operating with guaranteed clean energy 24/7 without depending on the grid. For Europe, it marks the entry into a new generation of ultra-compact reactors: the Stellarium occupies just four cubic meters.

The technology behind the reactor. The Stellarium is a fourth-generation liquid chloride salt reactor, cooled by natural convection and fitted with four physical containment barriers. It operates on a closed fuel cycle, capable of sustaining fission for more than 20 years without refueling. Stellaria's roadmap calls for a first fission reaction in 2029 and, six years later, commercial deployment and delivery of the reactor to Equinix. According to the company, the energy density of this type of reactor is "70 million times higher than that of lithium-ion batteries," which would allow a single Stellarium to supply a city of 400,000 inhabitants.

As fusion progresses, fast fission arrives first. To understand why a fast neutron reactor reaches the world of AI before fusion does, just compare where each technology stands. Fusion is making spectacular progress, such as the record of the French WEST reactor, which maintained a stable plasma for 22 minutes, or the Wendelstein 7-X, which sustained a high-performance plasma for 43 seconds, but it remains experimental. ITER will not be operational this decade and commercial prototypes will not arrive until well into the 2030s. Advanced fission, on the other hand, is much closer to market. Reactors like Stellaria's, with molten salt and fast neutrons, do not require the extreme conditions of fusion and can be deployed sooner. The company plans its first reaction in 2029 and commercial deployment in 2035.
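The "70 million times" energy-density comparison quoted above can be sanity-checked with standard physical constants; the battery figure below is an assumption for a good lithium-ion cell, and the calculation ignores thermal-to-electric conversion losses.

```python
# Order-of-magnitude check of the energy-density comparison quoted above.
# Physical constants are standard; the battery figure is an assumption.

fission_energy_mev = 200                  # energy released per U-235 fission
atoms_per_kg_u235 = 6.022e23 / 0.235      # Avogadro's number / molar mass in kg
joules_per_mev = 1.602e-13
fission_j_per_kg = fission_energy_mev * joules_per_mev * atoms_per_kg_u235

liion_wh_per_kg = 250                     # assumed energy density of a good Li-ion cell
liion_j_per_kg = liion_wh_per_kg * 3_600

print(f"Fission: ~{fission_j_per_kg:.1e} J/kg")
print(f"Li-ion:  ~{liion_j_per_kg:.1e} J/kg")
print(f"Ratio:   ~{fission_j_per_kg / liion_j_per_kg:.0e}")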
The data centers of the future will no longer depend on the grid. Equinix already operates more than 270 data centers in 77 metropolitan areas. In Europe they run on 100% renewables, but future AI demand will require a constant, carbon-free source that does not congest the electrical grid. According to Stellaria, this agreement "lays the foundation for data centers with lifetime energy autonomy." And, if the company keeps to its schedule, Europe will become the first region in the world where artificial intelligence is powered by compact reactors that recycle their own nuclear waste. The technological race between advanced fission and fusion is far from over but, today, the first fast neutron reactor intended for AI does not come from ITER or an industrial giant: it comes from a French startup. Europe has just opened a door that could transform both the future of energy and that of computing.

Image | Freepik and Stellaria
In Xataka | Google hit the red button when ChatGPT overtook it. Now it is OpenAI that has pressed it, according to the WSJ

Sam Altman is trying to buy his own rocket company to compete with SpaceX. The key: data centers

The rivalry between Sam Altman and Elon Musk has just reached its highest point: space. And all so that OpenAI can deploy its own data centers in orbit.

The news. As the Wall Street Journal revealed, the CEO of OpenAI has been exploring the purchase of Stoke Space, a Seattle startup that develops reusable rockets, with the goal of building data centers in space. Although talks with Stoke Space cooled in the fall, the move confirms a trend we have been watching for months: Silicon Valley is outgrowing the Earth to fuel AI.

Sam's plan. According to the Journal's sources, Sam Altman was not looking for a launch provider but for an investment that would give OpenAI majority control of Stoke Space. Stoke Space, founded in 2020 by former Blue Origin engineers, is developing a fully reusable rocket called 'Nova' to compete with SpaceX's Falcon 9.

What for. Altman maintains a tense rivalry with Elon Musk, so the logic of the move would be to reduce OpenAI's dependence on Musk's rockets should it decide to deploy servers in space. But above that there is a purely energy-related motivation. The computing demand of AI is so insatiable that the environmental consequences of keeping it on Earth would be unsustainable. In certain orbits, however, solar energy is available 24/7 and the vacuum of space offers an infinite heat sink for cooling equipment without spending water.

The fever of space data centers. Altman is not alone in this race. What until recently seemed an eccentricity has become a serious project for big technology companies.

And what does Musk say? The irony of Altman pursuing his own rocket company is that the industry's undisputed leader, Elon Musk's SpaceX, already has the infrastructure in place. While his competitors design prototypes and seek financing, Musk has cut the debate short with his usual bluntness: faced with the discussion about the need to build new orbital data centers, he said there is no need to reinvent the wheel: "It will be enough to scale the Starlink V3 satellites… SpaceX is going to do it."

Images | Brazilian Ministry of Communications | Village Global
In Xataka | Building data centers in space was the new hot business. Elon Musk just broke it with a tweet

Data centers consume a lot of water, but probably less than we thought. A book is to blame

We can criticize the AI boom for many reasons, but there is one that has struck a particular chord with society: the environmental impact, and more specifically the water consumed by each interaction with AI, needed to cool the servers. The problem is real, but everything indicates it has been magnified, and the origin appears to be a miscalculation in a popular book.

The book. It is 'Empire of AI', written by Karen Hao, which we have already discussed in Xataka. After interviewing hundreds of former employees and people close to the company, the author builds a detailed and highly critical account of OpenAI and, more specifically, of its CEO, Sam Altman. Among her criticisms of this 'AI empire', Hao cites the excessive water consumption of AI, going so far as to state that a single data center would consume 1,000 times more water than a city of 88,000 inhabitants.

The criticism. Andy Masley tells the story in his newsletter The Weird Turn Pro. According to his calculations, the data center would in reality consume about 22% of what the city consumes, or 3% of the entire municipal system. Masley also argues that the book confuses water withdrawal (a temporary extraction that is returned to the network) with actual consumption.

The calculation error. The author herself has responded to Masley's article, citing the email she sent to the Municipal Drinking Water and Sewage Service of Chile (SMAPA), from which she requested the total water consumption of Cerrillos and Maipú, the towns she used for the comparison. The problem is that Hao requested the figure in liters, but SMAPA replied without specifying the units, and everything indicates the numbers were actually cubic meters, hence the large discrepancy. The author has gone back to SMAPA to clarify the matter. It seems that, indeed, there is an error.

Estimates. How much water AI consumes has been a recurring question in recent years. In September 2024, a study published by The Washington Post calculated that generating a 100-word text with ChatGPT required 519 milliliters of water. The calculation was based on the total annual consumption of data centers and the type of cooling used. A truly enormous figure.

What companies say. AI companies are not very transparent about the water and energy consumption of their data centers. The big technology companies give total annual consumption figures in their sustainability reports. We know that a large part of that consumption goes to data centers, but it is impossible to know the real consumption of each query. Google has been the only one to publish specific energy and water consumption data for its AI: according to the company, the water consumption of each Gemini query was 0.26 milliliters, or roughly five drops of water. We cannot extrapolate this figure to all data centers or all companies, but it does suggest that earlier estimates are quite exaggerated.

Water controversy. None of this means there is no problem with water and AI. In fact, the Cerrillos data center at the center of the alleged calculation error was never built, because the Chilean courts halted it over the climate impact it would have had, especially in the context of the drought affecting the region. Data centers need a lot of water, so much so that initiatives are emerging to cool them by submerging them in the ocean.
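The unit mix-up described above, liters versus cubic meters, is enough on its own to inflate a comparison by a factor of 1,000. The figures below are placeholders chosen so the corrected ratio echoes the 22% mentioned above; they are not SMAPA's actual data.

```python
# The discrepancy boils down to a units mix-up: 1 cubic metre = 1,000 litres.
# The numbers below are illustrative placeholders, not SMAPA's actual figures.

city_figure_as_received = 5_000_000    # hypothetical figure received from the utility
datacenter_litres = 1_100_000_000      # hypothetical data-center figure, in litres

ratio_if_city_in_litres = datacenter_litres / city_figure_as_received
ratio_if_city_in_m3 = datacenter_litres / (city_figure_as_received * 1_000)

print(f"If the city figure is in litres:       data center looks ~{ratio_if_city_in_litres:.0f}x larger")
print(f"If it is actually in cubic metres:     data center is ~{ratio_if_city_in_m3:.2f}x the city")
```

The same raw numbers can therefore support either "hundreds of times a city's consumption" or "a fraction of it," depending entirely on which unit the utility's reply was in.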
The other problem. Water is just one of the issues data centers face; energy demand poses an even greater challenge. In 2024, data centers already accounted for 4% of total electricity consumption in the United States, and around some of these behemoths the electricity bill has risen 267% in recent years. Big tech is already warning that there is not enough power for so many chips, and proposals range from building nuclear power plants to taking data centers to space.

Image | Google
In Xataka | What is happening in the US is a warning for Spain: data centers driving up electricity bills in homes

AI data centers consume too much energy. Google’s ‘moonshot’ plan is to take them to space

Training models like ChatGPT, Gemini or Claude requires ever more electricity and water, to the point that AI's energy consumption threatens to exceed that of entire countries. Data centers have become genuine resource sinks. According to estimates by the International Energy Agency, the electricity consumption of data centers could double before 2030, driven by the explosion of generative AI. Faced with this outlook, technology giants are desperately looking for alternatives. And Google believes it has found one that seems straight out of science fiction: sending its artificial intelligence chips into space.

Conquering space. The company has revealed Project Suncatcher, an ambitious experiment that sounds like science fiction: placing its TPUs, the chips that power its artificial intelligence, on satellites fed by solar energy. The chosen orbit, sun-synchronous, guarantees almost constant light. In theory, those panels could work 24 hours a day and be up to eight times more productive than the ones we have on Earth. Google plans to test the technology with two prototype satellites before 2027, in a joint mission with the company Planet. The goal will be to check whether its chips and communication systems can survive the space environment and, above all, whether it is feasible to perform AI calculations in orbit.

The engineering behind the idea. Although it sounds like science fiction, the project has solid scientific foundations. Google proposes building constellations of small satellites, dozens or even hundreds, orbiting in compact formation at an altitude of about 650 kilometers. Each would carry Trillium TPU chips on board, connected to one another by optical laser links. Those light beams would let the satellites "talk" to each other at speeds of up to tens of terabits per second, an essential capability for processing AI tasks in a distributed way, as a terrestrial data center would. The technical challenge is enormous: at those distances the optical signal weakens quickly, so to compensate, the satellites would have to fly just a few hundred meters apart. According to Google's own study, keeping them that close will require precise maneuvering, but the calculations suggest that small orbit adjustments would be enough to keep the formation stable. In addition, engineers have already tested the radiation resistance of their chips: in an experiment with a 67 MeV proton beam, Trillium TPUs safely withstood a dose three times higher than they would receive during a five-year mission in low orbit. "They are surprisingly robust for space applications," the company concludes in its preliminary report.

The great challenge: making it profitable. Beyond the technical problems, the economic challenge is the one in focus. According to calculations cited by The Guardian and Ars Technica, if the launch price falls below $200 per kilogram by the mid-2030s, an orbital data center could be economically comparable to a terrestrial one. The comparison is made in terms of energy cost per kilowatt per year. "Our analysis shows that space data centers are not limited by physics or by insurmountable economic barriers," says the Google team. In space, solar energy is practically unlimited: a panel can produce up to eight times more than on the Earth's surface and generate electricity almost continuously. That would eliminate the need for huge batteries or water-based cooling systems, one of the biggest environmental problems of today's data centers.
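To give a feel for the "energy cost per kilowatt per year" comparison mentioned above, here is a heavily simplified sketch. The mass per kilowatt, satellite lifetime and terrestrial electricity price are assumptions for illustration; Google's actual model is more detailed and includes costs (hardware, ground stations, operations) that are ignored here.

```python
# Sketch of the cost-per-kilowatt-per-year comparison mentioned above.
# Every input is an illustrative assumption, not a figure from Google's study.

launch_usd_per_kg = 200        # the threshold cited in the article
satellite_kg_per_kw = 10.0     # assumed mass of panels, radiators and structure per kW
satellite_lifetime_years = 5   # assumed service life before the hardware is obsolete

orbital_usd_per_kw_year = launch_usd_per_kg * satellite_kg_per_kw / satellite_lifetime_years

ground_usd_per_kwh = 0.06      # assumed industrial electricity price
hours_per_year = 8_760
ground_usd_per_kw_year = ground_usd_per_kwh * hours_per_year

print(f"Orbit (launch cost only):  ~${orbital_usd_per_kw_year:,.0f} per kW-year")
print(f"Ground (electricity only): ~${ground_usd_per_kw_year:,.0f} per kW-year")
```

Under these assumptions the two figures land in the same range, which is the sense in which "$200 per kilogram" becomes the break-even threshold; at today's launch prices, several times higher, the orbital column dominates.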
However, not everything shines in a vacuum. As The Guardian recalls, each launch emits hundreds of tons of CO₂, and astronomers warn that the growing number of satellites "is like looking at the universe through a windshield full of insects." Moreover, flying such compact constellations increases the risk of collisions and space debris, an already worrying threat in low orbit.

A race to conquer the sky. Google's announcement comes amid a fever for space data centers; it is not the only company looking up. Elon Musk recently said that SpaceX plans to scale its Starlink satellite network, already more than 10,000 units strong, to create its own data centers in orbit. "It will be enough to scale the Starlink V3 satellites, which have high-speed laser links. SpaceX is going to do it," Musk wrote on X. For his part, Jeff Bezos, founder of Amazon and Blue Origin, predicted during Italian Tech Week that we will see "giant AI training clusters" in space within the next 10 to 20 years. In his vision, these centers would be more efficient and sustainable than terrestrial ones: "We will take advantage of solar energy 24 hours a day, without clouds or night cycles." Another unexpected actor is Eric Schmidt, former CEO of Google, who bought the rocket company Relativity Space precisely to move in that direction. "Data centers will require tens of additional gigawatts in a few years. Taking them off the Earth may be a necessity, not an option," Schmidt warned in a hearing before the US Congress. And Nvidia, the AI chip giant, also wants to try its luck: the startup Starcloud, backed by its Inception program, will launch the first H100 GPU into space this month to test a small orbital cluster. Its ultimate goal: a 5-gigawatt data center orbiting the Earth.

The new battlefield. The Google project is still in the research phase. There are no prototypes in orbit and no guarantees that there will be any soon. But the mere fact that a company of this caliber has published orbital models, radiation calculations and optical communication tests shows that the concept has moved from the realm of speculation to that of applied engineering. The project inherits the philosophy of the company's other moonshots, like Waymo's self-driving cars or quantum computers: explore impossible ideas until they stop being impossible. The future of computing may lie not underground or in huge industrial warehouses, but in swarms of satellites shining in the permanent sunlight of space.

Image | Google
In Xataka | While Silicon Valley seeks electricity, China subsidizes it: this is how it wants to win the AI war
