We already have the world’s first fast neutron nuclear reactor. We are going to use it for AI data centers

The growth of artificial intelligence is driving global electricity demand to historic highs. The expansion of data centers, the advance of electrification and the industrial rebound are straining aging grids that are already saturated in multiple countries. In this scenario, the digital sector, a large consumer of electricity for the development of AI, faces a paradox: it needs much more energy, but it must obtain it without increasing its emissions. And here arises a proposal that until recently would have seemed like science fiction: data centers powered by a compact fast neutron nuclear reactor.

The Stellaria–Equinix deal that no one saw coming. The French startup Stellaria, born from the French Alternative Energies and Atomic Energy Commission (CEA) and Schneider Electric, has announced a pre-purchase agreement with Equinix, one of the largest global data center operators. According to the press release, the agreement secures for Equinix the first 500 MW of capacity of the Stellarium, the molten salt, fast neutron reactor that the company plans to deploy starting in 2035. This reservation is part of Equinix's initiatives to diversify towards "alternative energies" applied to AI-ready data centers.

Autonomy, zero carbon and waste management. That is a brief summary of the first breed-and-burn reactor intended to supply data centers. As Stellaria explains, it offers: completely carbon-free and controllable energy, enough to make a data center autonomous; an underground design with no exclusion zone, thanks to its operation at atmospheric pressure and its liquid core; ultra-fast response to load variations, essential for generative AI; virtually infinite regeneration of fuel, part of which can come from the waste of current nuclear power plants; and multi-fuel capability, from uranium-235 and 238 to plutonium-239, MOX, minor actinides and thorium. For Equinix, this means solving one of its great challenges: operating with guaranteed clean energy 24/7 without depending on the grid. For Europe, it marks the entry into a new generation of ultra-compact reactors: the Stellarium occupies just four cubic meters.

The technology behind the reactor. The Stellarium is a fourth-generation liquid chloride salt reactor, cooled by natural convection and equipped with four physical containment barriers. It operates on a closed fuel cycle, capable of sustaining fission for more than 20 years without refueling. Stellaria's roadmap calls for a first fission reaction in 2029 and, six years later, commercial deployment and delivery of the reactor to Equinix. According to the company, the energy density of this type of reactor is "70 million times higher than that of lithium-ion batteries", which would allow a single Stellarium to supply a city of 400,000 inhabitants.

As fusion progresses, fast fission arrives first. To understand why a fast neutron reactor reaches the world of AI before fusion, just compare the technological moment of each. Fusion is making spectacular progress—such as the record of the French WEST reactor, which maintained a stable plasma for 22 minutes, or the Wendelstein 7-X, which sustained a high-performance plasma for 43 seconds—but it remains experimental. ITER will not be operational this decade and commercial prototypes will not arrive until well into the 2030s. Advanced fission, on the other hand, is much closer to the market. Reactors like Stellaria's, with molten salt and fast neutrons, do not require the extreme conditions of fusion and can be deployed sooner.
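As for that "70 million times" energy-density claim, a rough back-of-the-envelope check with textbook values suggests it is at least the right order of magnitude (a sketch using assumptions of mine, not Stellaria's figures):

```python
# Order-of-magnitude check of the "70 million times denser than lithium-ion" claim.
# Textbook physical constants and typical battery figures; not Stellaria's numbers.

MEV_TO_J = 1.602e-13      # joules per MeV
AVOGADRO = 6.022e23       # atoms per mole

energy_per_fission_mev = 200.0    # ~200 MeV released per U-235 fission
u235_molar_mass_kg = 0.235        # kg per mole of U-235

# Energy released if every atom in 1 kg of U-235 were fissioned
fission_j_per_kg = energy_per_fission_mev * MEV_TO_J * AVOGADRO / u235_molar_mass_kg

# A good lithium-ion cell stores roughly 250 Wh per kg
li_ion_j_per_kg = 250 * 3600

print(f"fission : {fission_j_per_kg:.1e} J/kg")              # ~8e13 J/kg
print(f"li-ion  : {li_ion_j_per_kg:.1e} J/kg")                # ~9e5 J/kg
print(f"ratio   : {fission_j_per_kg / li_ion_j_per_kg:.1e}")  # ~9e7
```

Complete fission of every atom is an idealization and real burnup is lower, so a figure in the tens of millions is plausible.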
The company plans its first reaction in 2029 and a commercial deployment in 2035.

The data centers of the future will no longer depend on the grid. Equinix already operates more than 270 data centers in 77 metropolitan areas. In Europe they are powered by 100% renewables, but their future AI demand will require a constant, carbon-free source that does not congest the electrical grid. According to Stellaria, this agreement "lays the foundation for data centers with lifetime energy autonomy." And, if the company meets its schedule, Europe will become the first region in the world where artificial intelligence is powered by compact reactors that recycle their own nuclear waste. The technological race between advanced fission and fusion is far from over but, today, the first fast neutron reactor intended for AI does not come from ITER or an industrial giant: it comes from a French startup. Europe has just opened a door that could transform, at the same time, the future of energy and computing.

Image | Freepik and Stellaria

In Xataka | Google hit the red button when ChatGPT caught it by surprise. Now it is OpenAI that has pressed it, according to the WSJ

Sam Altman is trying to buy his own rocket company to compete with SpaceX. The key: data centers

The rivalry between Sam Altman and Elon Musk has just reached its highest point: space. And all so that OpenAI can deploy its own data centers in orbit.

The news. As revealed by the Wall Street Journal, the CEO of OpenAI has been exploring the purchase of Stoke Space, a Seattle startup that develops reusable rockets, with the goal of building data centers in space. Although talks with Stoke Space cooled in the fall, the move confirms a trend we have been observing for months: Silicon Valley is outgrowing the Earth to fuel AI.

Sam's plan. According to the Journal's sources, Sam Altman was not looking for a launch provider, but rather an investment that would give OpenAI majority control of Stoke Space. Stoke Space, founded in 2020 by former Blue Origin engineers, is developing a fully reusable rocket called 'Nova' to compete with SpaceX's Falcon 9.

What for. Altman maintains a tense rivalry with Elon Musk, so the logic of this move would be to reduce OpenAI's dependence on Musk's rockets should it decide to deploy servers in space. But beyond that there is a purely energy-driven motivation. The computing demand of AI is so insatiable that the environmental consequences of keeping it on Earth threaten to become unsustainable. In certain orbits, however, solar energy is available 24/7 and the vacuum of space offers an infinite heat sink to cool equipment without wasting water.

The fever for space data centers. Altman is not alone in this race. What until recently seemed like an eccentricity has become a serious project for big technology companies.

And what does Musk say? The irony of Altman pursuing his own rocket company is that the industry's undisputed leader, Elon Musk's SpaceX, already has the infrastructure in place. While his competitors design prototypes and seek financing, Musk has cut off the debate with his usual forcefulness: faced with the discussion about the need to build new orbital data centers, he assured that there is no need to reinvent the wheel: "It will be enough to scale the Starlink V3 satellites… SpaceX is going to do it."

Images | Brazilian Ministry of Communications | Village Global

In Xataka | Building data centers in space was the new hot business. Elon Musk just broke it with a tweet

Data centers consume a lot of water, but it is probably less than we thought. It’s a book’s fault

We can criticize the AI boom for many reasons, but there is one that has deeply resonated with society: its environmental impact, and more specifically the water consumed by each interaction with the AI, needed to cool the servers. The problem is real, but everything indicates that it has been magnified, and the origin appears to be a miscalculation in a popular book.

The book. It is 'Empire of AI', written by Karen Hao, which we have already discussed in Xataka. After interviewing hundreds of former employees and people close to the company, the author constructs a detailed and highly critical account of OpenAI and, more specifically, of its CEO Sam Altman. Among the criticisms of this 'AI empire', Hao mentions the excessive water consumption of AI, going so far as to state that a data center would consume 1,000 times more water than a city of 88,000 inhabitants.

The criticism. Andy Masley explains it in his newsletter The Weird Turn Pro. According to his calculations, the data center would in reality consume 22% of what the city consumes, or 3% of the entire municipal system. Furthermore, Masley argues that the book confuses water withdrawal (a temporary extraction that is returned to the network) with actual consumption.

The calculation error. The author herself has responded to Masley's article, citing the email she sent to the Municipal Drinking Water and Sewage Service of Chile (SMAPA), from which she requested information on the total water consumption of Cerrillos and Maipú, the towns she used for the consumption comparison. The problem is that Hao requested the amount in liters, but SMAPA responded without specifying the units, and everything indicates that the figures were actually cubic meters, hence the large discrepancy. The author has consulted SMAPA again to clarify the information. It seems that, indeed, there is an error.

Estimates. How much water AI consumes has been a recurring question in recent years. In September 2024, a study published by The Washington Post calculated that generating a 100-word text with ChatGPT required 519 milliliters of water. The calculation was made from the total annual consumption of data centers and the type of cooling used. An enormous amount.

What companies say. AI companies are not very transparent about the water and energy consumption of their data centers. The big technology companies give total annual consumption figures in their sustainability reports. We know that a large part of that consumption goes to data centers, but it is not possible to know the real consumption of each query. Google has been the only one to publish specific energy and water consumption data for its AI. According to the company, the water consumption of each Gemini query was 0.26 milliliters, or in other words, about five drops of water. We cannot extrapolate this figure to all data centers or all companies, but it does suggest that previous estimates were quite exaggerated.

Water controversy. None of this means there is no problem with water and AI. In fact, the Cerrillos data center at the heart of the alleged calculation error was never built, because the Chilean courts halted it over the environmental impact it was going to have, especially in the context of the drought the region was experiencing. Data centers need a lot of water, so much so that initiatives are emerging to cool them by submerging them in the ocean.

The other problem. Water is just one of the problems data centers face; energy demand poses an even greater challenge.
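The scale of the liters-versus-cubic-meters mix-up described above is easy to illustrate with deliberately round, hypothetical numbers (not the actual SMAPA or book figures):

```python
# Hypothetical round numbers to show how reading cubic meters as liters
# inflates a comparison by a factor of 1,000. Not the real SMAPA/book data.

city_figure_received = 1_000_000       # figure returned without units
datacenter_use_liters = 1_000_000_000  # assumed data-center use, in liters

# If the city figure is (wrongly) read as liters:
ratio_misread = datacenter_use_liters / city_figure_received           # 1000x

# If the same figure was actually cubic meters (1 m3 = 1,000 L):
ratio_correct = datacenter_use_liters / (city_figure_received * 1000)  # 1x

print(ratio_misread, ratio_correct)  # 1000.0 vs 1.0
```

A unit slip of this kind shifts any such comparison by three orders of magnitude, which is the size of the discrepancy being debated.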
In 2024, data centers already accounted for 4% of total electricity consumption in the United States, and around some of these beasts the electricity bill has risen 267% in recent years. Big tech is already warning that there is not enough power for so many chips, and the options being considered range from building nuclear power plants to taking data centers into space.

Image | Google

In Xataka | What is happening in the US is a warning for Spain: data centers are driving up household electricity bills

AI data centers consume too much energy. Google’s ‘moonshot’ plan is to take them to space

Training models like ChatGPT, Gemini or Claude requires more and more electricity and water, to the point that the energy consumption of AI threatens to exceed that of entire countries. Data centers have become real resource sinks. According to estimates by the International Energy Agency, the electricity consumption of data centers could double before 2030, driven by the explosion of generative AI. Faced with this outlook, technology giants are desperately looking for alternatives. And Google believes it has found one that seems straight out of science fiction: sending its artificial intelligence chips into space.

Conquering space. The company has revealed Project Suncatcher, an ambitious experiment that sounds like science fiction: placing its TPUs—the chips that power its artificial intelligence—on satellites powered by solar energy. The chosen orbit, sun-synchronous, guarantees almost constant light. In theory, these panels could work 24 hours a day and be up to eight times more efficient than the ones we have on Earth. Google plans to test the technology with two prototype satellites before 2027, in a joint mission with the company Planet. The objective will be to check whether its chips and communication systems can survive the space environment and, above all, whether it is feasible to perform AI calculations in orbit.

The engineering behind the idea. Although it sounds like science fiction, the project has solid scientific foundations. Google proposes to build constellations of small satellites—dozens or even hundreds—that orbit in compact formation at an altitude of about 650 kilometers. Each would carry Trillium TPU chips on board, connected to each other by optical laser links. These light beams would allow the satellites to "talk" to each other at speeds of up to tens of terabits per second, an essential capability for processing AI tasks in a distributed manner, as a terrestrial data center would. The technical challenge is enormous: at these distances the optical signal weakens quickly. To compensate, the satellites would have to fly just a few hundred meters apart. According to Google's own study, keeping them so close will require precise maneuvering, but calculations suggest that small orbit adjustments would be enough to keep the formation stable. In addition, engineers have already tested the radiation resistance of the chips. In an experiment with a 67 MeV proton beam, Trillium TPUs safely withstood a dose three times higher than they would receive during a five-year mission in low orbit. "They are surprisingly robust for space applications," the company concludes in its preliminary report.

The great challenge: making it profitable. Beyond the technical problems, the economic challenge is the one in the spotlight. According to calculations cited by The Guardian and Ars Technica, if the launch price falls below $200 per kilogram by the mid-2030s, an orbital data center could be economically comparable to a terrestrial one. The comparison is made in terms of energy cost per kilowatt per year. "Our analysis shows that space data centers are not limited by physics or insurmountable economic barriers," says the Google team. In space, solar energy is practically unlimited. A panel can produce up to eight times more than on the Earth's surface and generate almost continuous electricity. That would eliminate the need for huge batteries or water-based cooling systems, one of the biggest environmental problems of today's data centers. However, not everything shines in a vacuum.
As The Guardian recalls, each launch emits hundreds of tons of CO₂, and astronomers warn that the growing number of satellites "is like looking at the universe through a windshield full of insects." Furthermore, flying such compact constellations increases the risk of collisions and space debris, an already worrying threat in low orbit.

A race to conquer the sky. Google's announcement comes in the midst of a fever for space data centers, and it is not the only company looking up. Elon Musk recently assured that SpaceX plans to scale its Starlink satellite network—already with more than 10,000 units—to create its own data centers in orbit. "It will be enough to scale the Starlink V3 satellites, which have high-speed laser links. SpaceX is going to do it," Musk wrote on X. For his part, Jeff Bezos, founder of Amazon and Blue Origin, predicted during Italian Tech Week that we will see "giant AI training clusters" in space in the next 10 to 20 years. In his vision, these centers would be more efficient and sustainable than terrestrial ones: "We will take advantage of solar energy 24 hours a day, without clouds or night cycles." Another unexpected actor is Eric Schmidt, former CEO of Google, who bought the rocket company Relativity Space precisely to move in that direction. "Data centers will require tens of additional gigawatts in a few years. Taking them off the Earth may be a necessity, not an option," Schmidt warned in a hearing before the US Congress. And Nvidia, the AI chip giant, also wants to try its luck: the startup Starcloud, backed by its Inception program, will launch the first H100 GPU into space this month to test a small orbital cluster. Its ultimate goal: a 5-gigawatt data center orbiting the Earth.

The new battlefield. The Google project is still in the research phase. There are no prototypes in orbit and no guarantees that there will be any soon. But the mere fact that a company of this caliber has published orbital models, radiation calculations and optical communication tests shows that the concept has already moved from the realm of speculation to that of applied engineering. The project inherits the philosophy of the company's other moonshots—like Waymo's self-driving cars or its quantum computers—: explore impossible ideas until they stop being impossible. The future of computing may not be underground or in huge industrial warehouses, but in swarms of satellites shining in the permanent sun of space.

Image | Google

In Xataka | While Silicon Valley seeks electricity, China subsidizes it: this is how it wants to win the AI war

Making cell towers mini data centers for AI

A few days ago we heard the news that NVIDIA had invested $1 billion in Nokia, taking a 2.9% stake in the Finnish company. Although the check is striking news in itself (for many people, Nokia fell off the map years ago), the move makes all the sense in the world: it is the Western response to the Chinese technology companies that for years have been investing in the deployment of 6G. And of course, with NVIDIA behind them, telephony base stations can serve for much more than just providing coverage to millions of devices: they can become small distributed data centers for AI.

The plan behind the investment. NVIDIA and Nokia are not just designing equipment for mobile networks. They are redefining what a cell tower is. The idea is that each base station (the towers and small installations we see on buildings and streets) becomes a computing node capable of executing AI workloads in real time. "An AI data center in everyone's pocket," according to Justin Hotard, CEO of Nokia. The key here is to bring processing closer to the user in order to eliminate latency, which is usually one of the most frequent problems in AI applications that require real-time processing, such as instant translation, augmented reality or autonomous vehicles.

Without latency, everything changes. When we ask an AI to translate a conversation or analyze live images, every millisecond counts. Sending that data to a distant server, processing it and returning it introduces a significant delay that mars the final experience. The most logical solution is to decentralize: to have the AI live close to the user, in the telecommunications infrastructure itself. In this sense, NVIDIA will contribute chips and specialized software, while Nokia will adapt its 5G and 6G equipment to integrate that computing capacity. As announced, the first commercial tests will begin in 2027 with T-Mobile in the United States.

The Nokia effect on the stock market. Nokia shares shot up 21% after the news broke, reaching highs not seen since 2016. NVIDIA and OpenAI have become the King Midas of technology: everything they touch goes up. The investment is also a boost to the strategy of Hotard, who since his arrival in April has accelerated Nokia's shift towards data centers and AI. The company, which already acquired Infinera for $2.3 billion to strengthen its position in data center networking, is now positioned as the only Western supplier capable of competing with Huawei in the complete supply of telecommunications infrastructure.

Another space race. While Europe and the United States accelerate their 6G plans, China has been investing aggressively in this technology for years. This alliance between NVIDIA and Nokia is a somewhat late but necessary response. Jensen Huang, CEO of NVIDIA, explained in his speech in Washington that the goal is "to help the United States bring telecommunications technology back to America." It is not just about infrastructure, but about strategic control: whoever dominates this network of brains distributed throughout cities and roads will control the AI applications of the future.

And now what. The consulting firm McKinsey estimates that investment in data center infrastructure will exceed $1.7 trillion by 2030, driven by the expansion of AI. Nokia and NVIDIA want their piece of the pie, but they are also betting on a structural change: that mobile networks stop being mere data pipes and become intelligent computing platforms.
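The latency argument above is easy to quantify with a rough propagation-delay estimate (a sketch; the distances are illustrative assumptions, not figures from NVIDIA or Nokia):

```python
# Back-of-the-envelope round-trip propagation delay over optical fiber.
# Light in fiber travels at roughly 200 km per millisecond.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds for a one-way distance."""
    return 2.0 * distance_km / FIBER_KM_PER_MS

print(round_trip_ms(5))      # ~0.05 ms to a nearby base station
print(round_trip_ms(2000))   # ~20 ms to a distant cloud region
```

Radio access, queuing and the inference itself add far more on top, but the fixed round trip to a distant region alone eats a large slice of a real-time application's latency budget, and that is precisely the part that edge computing removes.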
It remains to be seen whether this model works commercially and whether operators are willing to upgrade their infrastructure.

Cover image | NVIDIA

In Xataka | Xi Jinping wants two things: first, to create a global center that regulates AI. Second, that it be in Shanghai

The “foodies” have turned the historic centers of Italy into hell, so the cities are getting serious

Italy is at war. Not a particularly unusual one: it shares it with other countries and cities in the battle to stop mass tourism. It is trying everything: higher tourist taxes, entrance fees that were doubled after their initial success, a veto on key lockboxes and even taxes on tourists' dogs. Now, several cities have agreed on one thing: stopping the 'foodies'. How? By prohibiting the opening of new restaurants in historic centers.

In short. Walking through the historic center of any Italian city is like entering a culinary amusement park. Not only are there restaurants wherever you look, but they form a fairground in which eye-catching signs appealing to tradition, and artisans preparing fresh pasta in front of the windows like circus animals, are a constant. Now, cities like Rome, Turin, Florence, Palermo and Bologna have introduced restrictions on opening new restaurants in their historic centers.

Displacing the population. Although Italians love their traditional cuisine as much as anyone, they are getting tired of their city centers becoming theme parks. Some streets are particularly hard hit, like Via Maqueda in Palermo or, to a lesser extent, Via del Pellegrino in Rome, which are basically a succession of food businesses. As The New York Times notes, hundreds of new restaurants have opened over the last decade in just a few streets of those tourist spots: establishments that dress themselves up in tradition but are not traditional, and that push the local population far from their homes. It is something seen in many other cities around the world, where tourism drives up the price of land in very specific spots, along with rents, and locals watch traditional businesses disappear while others linked to that consumerism flourish.

"We must protect the center." In the case of Italy, the aim is to fight gastronomic gentrification, which is replacing historic markets and local shops with businesses aimed at mass tourism, and also to protect the authenticity and daily life of citizens, preserving tradition and diversity against more homogeneous or franchised models. Luisa Guidone, Bologna's Councillor for Commerce, comments that "the center must be protected, maintaining the mix of existing stores that allow citizens to have their daily shopping experience."

Everyone fights their own war. As we said, the prohibition or limitation on opening new premises is not part of a national initiative, but rather one taken by each municipality. In Palermo, new restaurant licenses have been expressly prohibited in emblematic areas such as Via Maqueda. In Florence, there will be no new openings of bars, restaurants or any food establishments in more than 50 streets in the center and some peripheral ones. In the aforementioned Bologna, until June 2028, new commercial projects that want to open in the historic center will be scrutinized carefully, and in Rome and Turin it is more of the same (especially around the Vatican). Then there are exceptions. Florence, for example, allows new establishments such as art galleries, bookstores or craft shops: anything not focused on mass hospitality.

Not just food. But this goes beyond gastronomic gentrification. In the Corriere di Bologna we can read that the restrictions mean that, until 2028, it will be prohibited to open new currency exchange shops or call centers (phone centers, Internet points and money transfer points) in the historic center, as well as "buy gold" shops and automatic gaming machine ('slot machine') venues.
Debate. Now, promoting something like this is complicated when tourism represents almost 12% of the Italian economy and gastronomic tourism is an important source of income. In fact, the NYT article includes statements from tourists who just want to eat, and also from the heads of FIPE, the Italian Federation of Food and Tourism Companies, who point out that "sometimes, the Colosseum is an excuse for an American between a cacio e pepe and an amatriciana." It is also criticized that each city is waging this war on its own and that there is no law promoted at the national level. In any case, as we said at the beginning, it is evident that Italy has a problem with a mass tourism that is displacing the population that actually lives in those cities. Traditional businesses have closed or been converted, going from selling everyday products for citizens to traditional dishes dressed up in an eye-catching way for tourists. And finding the balance seems tremendously complicated.

Images | Anna Church, Maxime Steckle, Matej Buchla

In Xataka | "Fodechinchos free": in a bar in Galicia, tourismphobia is being redirected against Spaniards from other regions

Building data centers in space was the new hot business. Elon Musk just broke it with a tweet

The debate over the feasibility of building gigantic data centers in orbit had been heating up for months. It is Silicon Valley's new big idea to solve the insatiable energy appetite of artificial intelligence. Until, as usual, Elon Musk entered the conversation with the subtlety of a hammer.

Elon Musk has joined the chat. After weeks of debate about the feasibility of building servers in space, Eric Berger, editor at Ars Technica, argued that it will end up being a more plausible option once the technology exists to assemble satellites in orbit autonomously. It was the moment Elon Musk chose to enter the conversation. "It will be enough to scale the Starlink V3 satellites, which have high-speed laser links," wrote the CEO of SpaceX. "SpaceX is going to do it," he said. A phrase that has probably landed like a blow on the startups that are riding the momentum of AI to go out in search of financing.

Why the hell do we want servers in space? The idea of moving computing to Earth orbit responds to a very real crisis: AI is an energy monster, and demand for data centers continues to grow. Given this panorama, space offers two advantages that are impossible on Earth. Almost unlimited energy: in a sun-synchronous orbit, solar panels receive sunlight almost continuously (more than 95% of the time). Free cooling: land-based data centers consume millions of liters of fresh water to cool down, while in space, with a large enough radiator, the vacuum can be "an infinite heatsink at -270°C." The heat would be radiated into the void without wasting a single drop of water.

The new titans of space AI. Musk is not the first to see the business; in fact, he arrives at a party where the first contracts are already being handed out. Jeff Bezos predicted during Italian Tech Week that we will see "giant training clusters" for AI in orbit in the next 10 or 20 years. Eric Schmidt, the former CEO of Google, bought the rocket company Relativity Space precisely for this purpose. And Nvidia, the undisputed king of AI hardware, has actively backed the startup Starcloud, which plans to launch the first NVIDIA H100 GPU into space this November, with the goal of eventually building a monstrous 5-gigawatt orbital data center.

Why Musk would win. The vision of Bezos, Schmidt and Starcloud faces two colossal obstacles: the cost of launch and the construction of the servers themselves. By some calculations, a 1 GW data center would require more than 150 launches with current technology. And Starcloud's plan for a 4-kilometer-wide array is a logistical nightmare. Elon Musk has Starship, the giant rocket on which all of his competitors' business models depend in order to be profitable. And he does not need to build a new orbital data center network: he just has to adapt and scale the one he already has.

10,000 satellites and counting. SpaceX's Starlink constellation no longer competes against satellite internet; it is going after terrestrial fiber. Musk's company has already launched 10,000 satellites and is preparing the deployment of the new V3 satellites, designed for Starship and equipped with high-speed laser links. According to SpaceX itself, each Starship launch will add 60 terabits per second of capacity to a network that is already, in practice, a global computing and data mesh. While Starcloud needs to hire a rocket and assemble solar and cooling panels 4 km wide, Musk simply needs Starship to finish development so he can keep launching satellites.

In Xataka | Starlink stopped competing with satellite Internet companies a long time ago: now it is going for something much bigger

NVIDIA has risen to the top thanks to its AI data centers. Its next big leap: cars

NVIDIA has unveiled its Drive AGX Hyperion 10 platform, a computing and sensor system designed so that any manufacturer can produce Level 4 autonomous vehicles. Uber has already signed an agreement to deploy 100,000 units across its global network starting in 2027, and Stellantis, Lucid and Mercedes-Benz have also joined the project.

Why it matters. For years, autonomous driving has been a persistent promise often wrapped in marketing. NVIDIA has turned that promise into an industrial offering with standardized architecture, certified chips and out-of-the-box simulations. It does not sell autonomous cars, but it does sell the operating system that will make them possible.

The contrast. Tesla has been selling autonomy as a leap of faith for a decade, with constant updates, its own fleet and promises of "millions of autonomous Teslas" every year. NVIDIA, on the other hand, offers an open platform where any manufacturer can plug in its hardware. Tesla wants to be the Apple of cars; NVIDIA prefers to be something closer to Windows.

Between the lines. Automotive accounts for only 1.3% of NVIDIA's revenue, but that segment is growing faster than the rest. In any case, Uber's announcement has no real timetable for those 100,000 units, at least none that has been made public. Waymo, which has been developing its robotaxis for years, is already on its sixth generation and has Alphabet's financial muscle behind it, yet it barely operates 2,000 of them. There is a considerable gap between ambition and reality.

The backdrop. Drive Hyperion 10 is based on two Thor chips (2,000 teraflops each), fourteen cameras, nine radars, one LiDAR and twelve ultrasonic sensors. NVIDIA has designed it with full redundancy: if a component fails, the vehicle stops safely to avoid the chain of errors that multiplies the potential damage. Lucid will be one of the first to offer Level 4 autonomous driving to individual customers and not just fleets. Its interim CEO has admitted that so far they have disappointed in terms of driving assistance. Their bet on NVIDIA is the classic implicit admission: it is better to buy the brain than to build it.

The money trail. NVIDIA is not going to build robotaxis; for now it sells infrastructure: chips, simulation software, synthetic data… And it charges for each vehicle that uses its platform. It is a more predictable revenue model than depending on full autonomy arriving one day. Huang, in any case, has said that that moment is near. The interesting thing is not whether he is right, but that his definition no longer depends on blind faith. It depends on regulators, certifications and industrial tests. Autonomy has stopped being science fiction and has become an engineering problem. And those problems are solved with processes, not promises.

In Xataka | China has turned the electric car market into a crazy race. And Porsche is paying for it with billion-dollar losses

Featured image | Xataka

Data centers do not want to depend on the conventional electrical grid. The solution: build their own power plants

AI data centers have sparked a new fever: so-called "bring your own power." The demand and consumption pressure these facilities impose is so enormous that operators do not want to depend on external sources. The solution is theoretically simple, and we are already seeing it: when a new data center is built, it is now normal for some type of power plant to be built next to it.

We are seeing it now. The data centers that OpenAI and Oracle are building in West Texas are accompanied by the construction of a natural gas power plant. Both xAI's Colossus 1 and Colossus 2 in Memphis run on gas turbines. And as The Wall Street Journal also notes, more than a dozen Equinix data centers across the US are powered by stand-alone fuel cells. If the conventional electrical grid cannot be used, no problem: you build a power plant and that's it.

The US has an electricity problem. The technology giants would prefer to connect to the conventional grid, but bottlenecks in the supply chain, bureaucracy (permits, licenses) and the slowness in building the necessary transmission infrastructure prevent it. According to the firm ICV, the United States would need to add about 80 GW of new generation capacity per year to keep pace with AI, but right now less than 65 GW per year is being built. There is another direct consequence of this problem: the rise in electricity bills.

Data centers that look like cities. The needs and ambition of AI companies have turned data centers into monsters of computation and resource consumption. A single one can consume as much electricity as 10,000 stores of the Walmart chain, the WSJ estimates. Before 2020, data centers represented less than 2% of US electricity consumption; by 2028 they are expected to represent up to 12%. A 1.5 GW data center, for example, would have a consumption similar to that of the city of San Francisco, with about 800,000 inhabitants.

China has a big advantage over the US here. While the US deals with that lack of power, China does not stop investing in new energy generation. According to data from the National Energy Administration, the Asian country added 429 GW of new generation capacity in 2024, while the US added only 50 GW. It is true that China has four times the population, but its centralized planning is helping it avoid the problems that afflict the US electrical grid.

The white knight to the rescue. Faced with this shortage, natural gas has become the preferred resource for on-site energy generation. Although large turbines have long delivery times, smaller turbines and fuel cells that run on natural gas are being used because of how quickly they can be obtained and installed.

Renewables lose steam. Meanwhile, things are not looking promising for renewables (solar and wind, especially). There are about 214 GW of new generation theoretically in the pipeline, but spending on these technologies could decline due to the potential loss of tax credits: the Trump administration argues that those clean energies do not provide the constant flow necessary for AI.

The nuclear alternative. Faced with this apparent decline, there is growing interest in small modular reactors (SMRs), which offer the advantages of this type of plant along with a flexibility that can be very attractive for AI data centers. Amazon, Google, Meta and Microsoft are betting part of their future on nuclear power, but that does not mean there are no challenges to overcome.
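The San Francisco comparison above roughly checks out against a quick estimate (a sketch using a US-average per-capita figure, which is my assumption, not city data):

```python
# Sanity check: a 1.5 GW data center versus a city of ~800,000 people.
# Per-capita electricity use is assumed at a rough US-wide average.

HOURS_PER_YEAR = 8760

datacenter_twh_per_year = 1.5 * HOURS_PER_YEAR / 1000   # 1.5 GW flat out -> ~13 TWh/yr
city_twh_per_year = 800_000 * 12.5 / 1_000_000          # ~12.5 MWh per person -> ~10 TWh/yr

print(f"data center: {datacenter_twh_per_year:.1f} TWh/yr")
print(f"city (est.): {city_twh_per_year:.1f} TWh/yr")   # same order of magnitude
```

Real data centers do not run at full load every hour and a dense city's per-capita figure differs from the national average, but the two numbers land in the same order of magnitude, so the comparison is reasonable.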
Image | Wolfgang Weiser

In Xataka | World record in nuclear fusion: the German Wendelstein 7-X reactor has broken all records

Data centers are sending household electricity bills soaring

Spain is betting heavily on the development and creation of new data centers. The AI boom has reached our country, and although that attracts investment and economic capital, it can also lead to serious problems for consumers. Especially one very clear one: that we end up paying more for electricity.

Spain, watch out for data centers. In recent months we have seen data center construction projects multiply in our country. It is estimated that the Community of Madrid will host 1.7 GW of capacity by 2030, which is paradoxical, because it is the region with the largest energy deficit in Spain and yet it is capturing a good part of these projects.

Aragon, in another league. Aragon has so many data center projects that it expressed its disappointment when it learned that the reinforcement of the electricity network for these facilities across all of Spain will be 3.8 GW. Officials of the Aragonese government described the figure as "scant," especially considering that the region alone has projects that would exhaust that capacity.

The US shows us a (worrying) future. A Bloomberg investigation reveals how, over the last five years, the creation of new data centers has been pushing electricity bills up markedly. Those centers, previously dedicated to expanding cloud infrastructure and now totally focused on the AI boom, are behind that increase. Energy consumption at these facilities is soaring, and it ends up affecting electricity prices in the surrounding regions.

Prices that nearly quadruple. In 2020, Baltimore residents paid on average $17 per MWh. In 2025 that price is $38 per MWh. In Buffalo things are even worse, and prices have tripled in five years, going from $11 to $33 per MWh. In the areas of the United States close to large concentrations of data centers, the wholesale price of electricity has risen by as much as 267% in the last five years. LMPs (locational marginal prices) are the wholesale prices set at nodes of the electricity grid; almost three out of four such nodes have seen price increases when they are close to data centers, while nodes in more distant areas have even seen their prices fall. Source: Bloomberg.

An uneven climb. The study reveals how wholesale electricity prices in the US have increased significantly in recent years, although it is true that these increases have been distributed unevenly geographically: some areas have seen modest rises, while others have watched prices shoot up by as much as that aforementioned 267%, close to quadrupling.

The data center penalty. 70% of the points where price increases were recorded are within 80 kilometers of data centers with significant activity. It is a figure that makes the impact of these data centers on residents' electricity bills clear. And it is only going to get worse: current estimates, BNEF indicates, forecast that energy demand from data centers in the US will double by 2035 and will be the largest increase in energy demand since the 1960s. In ten years, that demand will represent 9% of the total. Globally, data centers are expected to account for more than 4% of the electricity consumed in 2035; if those facilities were a country, they would rank fourth in energy consumption, behind only China, the US and India.

Perfect storm.
Demand is also linked to the rise of cryptocurrencies, the boost in US manufacturing and the "electrification of the economy," which includes areas such as electric vehicles and domestic heating systems. The retirement of traditional power plants in areas such as Baltimore has only aggravated the problem: there is less energy supply and more demand, which pushes prices up again.

The world already knows what is coming and is reacting. What is happening in the US is already prompting reactions in other countries. The Netherlands: water and energy needs led the Amsterdam City Council to impose a moratorium on the construction of new data centers in 2019. Singapore: it also established a pause on the creation of this type of facility between 2019 and 2022, although the government made clear that it would be more selective with future projects. Ireland: in 2024 the country reached a worrying milestone, with data centers already consuming more than households. Data centers went from 5% of the country's total consumption in 2015 to 18% in 2022 and 21% in 2023; household consumption represented 18% that year.

The solution: make Big Tech pay the bill. US utilities such as Dominion Power are clear that "data centers should pay the full cost of their energy consumption." The big technology companies know very well that these facilities create extraordinary energy demand, and they are exploring solutions such as the use of SMR reactors for their AI data centers. The idea is interesting, but complex.

Supply and demand. Spain faces a future in which energy supply and demand could become unbalanced, as is already happening in the United States. If data centers begin to place more and more load on the network, it is reasonable to think that the cost of electricity will increase and cause the effect users least want: higher electricity bills. Renewables could help mitigate the problem, but only if the grid is capable of absorbing both the new generation and the new mass demand from data centers.

Image | Microsoft
