Your dream of putting AI data centers in space is probably not feasible

The possibility of setting up data centers for artificial intelligence (AI) in space is very attractive. So much so that several CEOs of the largest technology companies in the US have not hesitated to dive in and publicly back the strategy. Jeff Bezos predicted in early October 2025 that data centers will reach space over the next two decades, solving in one fell swoop the power supply problems these facilities currently pose on Earth. Elon Musk was quick to stoke the discussion further: shortly after Bezos' statement, he posted on X that SpaceX only needed to scale its Starlink V3 satellites, equipped with high-speed laser links, to bring the idea to fruition. In fact, he closed his post with a forceful statement: "SpaceX is going to do it". However, the laws of physics are implacable, and SpaceX has had no choice but to acknowledge to its investors the daunting challenges this project entails.

Orbital data centers may not come to fruition

According to Reuters, SpaceX has delivered an official document to its investors in which it recognizes that both orbital AI data centers and human settlement on the Moon and Mars depend on technologies that have not yet been developed or tested and that, therefore, may not be commercially viable. SpaceX is preparing its IPO, and this assessment reflects the caution imposed by the legal obligation to be scrupulously honest about risks in order to avoid future lawsuits from new shareholders. "Our efforts to develop orbital AI computing and in-orbit, lunar and interplanetary industrialization are in the early stages and involve significant technical complexity and the use of technologies that have not yet been tested. For these reasons they may not be able to achieve commercial viability," SpaceX clarifies.

There is no doubt that the challenges that need to be solved for data centers to reach space are colossal. One of them is the impact of ionizing radiation on the hardware. This radiation is either high-frequency energy, such as X-rays and gamma rays, or particle radiation, such as alpha and beta, capable of stripping electrons from atoms and thus altering the structure of molecules. In space, server chips are not protected by the Earth's atmosphere and magnetic field, which leaves them very vulnerable to ionizing radiation, which can degrade them permanently. Solving this problem will require developing some type of shielding capable of protecting the servers' hardware from cosmic radiation.

This requirement leads us to the next critical challenge: in space it is not possible to cool servers by convection, as on Earth, because in a vacuum there is neither air nor water. All the heat has to be radiated away, which would require enormous radiators (the sketch below gives a sense of the scale). Several solutions to these problems can be proposed, but we must not overlook that it is crucial to minimize the weight and complexity of the material that has to be put into orbit. Otherwise, commercial viability will be non-existent.

The two challenges we have just examined are probably the hardest to solve, but orbital data centers pose further difficulties. One is that delivering the gigawatts of power they require would demand enormous solar arrays. Furthermore, for some applications the latency these space installations would introduce would probably be unacceptable.
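To get a feel for the scale of that radiator problem, a back-of-the-envelope estimate with the Stefan-Boltzmann law is enough. The figures below are illustrative assumptions of ours, not SpaceX numbers: a radiator at a typical electronics-friendly temperature, high emissivity, and solar and Earth heat input optimistically ignored.

```python
# Rough radiator sizing for an orbital data center via the
# Stefan-Boltzmann law. All inputs are illustrative assumptions.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiator_area_m2(heat_w, temp_k=300.0, emissivity=0.9):
    """Area needed to radiate `heat_w` to deep space from one side,
    ignoring sunlight and Earthshine falling on the panel."""
    flux_w_m2 = emissivity * SIGMA * temp_k**4  # ~413 W/m^2 at 300 K
    return heat_w / flux_w_m2

# A 1 GW facility, comparable to a large terrestrial AI campus:
print(f"{radiator_area_m2(1e9) / 1e6:.1f} km^2")  # -> 2.4 km^2
```

Even under these optimistic assumptions, the answer comes out in square kilometers of radiator per gigawatt, which is why the weight-and-complexity warning above sits at the heart of the viability question.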
And, on top of that, maintaining an orbital data center would be extremely expensive. In fact, it probably wouldn't even be economically feasible, which would force owners to introduce massive redundancy, pushing the whole project away from profitability.

Image | Freepik

More information | Reuters

In Xataka | Elon Musk knows that TSMC is overwhelmed: Terafab is his idea to completely change the global chip industry

Much of the world economy right now consists of setting up data centers. And there is already a game on Steam that simulates it

Surely what you want most when you get home from work is to turn on your PC or console and play a game about working. There is not an ounce of sarcasm in that sentence, because for some time now games about exactly that, working, have become popular. And I don't mean 'Stardew Valley'-style farm management or paying off an 'Animal Crossing' mortgage: I mean games that are, quite literally, a second job. There are games about cleaning, about being a company's IT person, about working in a supermarket or on a construction site. And, of course, about running a data center.

With the data center boom that has drunk the RAM and SSD markets dry, you may not be able to build a new PC because RAM prices are through the roof, but you can always fulfill the fantasy of being the person with the power to set up servers and wire everything with their own hands. It is called 'Data Center', and as a game for learning how data centers work while switching your brain off, it is… interesting.

The game of having an after-work job setting up data centers

Don't think of this as a construction game like 'The Sims' and the like. Here you already have the space, and what you do is internal management. You have to buy the frames to install racks, servers and switches, not at random but according to the needs of the clients who hire your services. Once you have the equipment, it is time to interconnect it with Ethernet cables that link systems within the same rack but must also physically run to other platforms. The easiest way is to route those cables through aluminum trays hanging from the ceiling, and once you think everything is ready, it is time to power it on.

This is when your customers' traffic appears as balls of light traveling along the cables. Those little balls have a purpose because, as things progress, your clients will ask for more and more bandwidth, and you will have to start managing and prioritizing. Equipment also breaks, so you will have to go to the in-game PC to order spare parts or upgrades for greater computing capacity. The idea is to build the perfect system with the best possible data flow, without bottlenecks and without wasting resources, scaling carefully to give each client what they need without oversizing.

Those little balls represent data traffic; each color is a customer.

It is, in short, a work game that can be repetitive, but that is exactly why it works so well. In this type of title you do not have to solve puzzles, be skillful with the controls or think too much. They are ideal for switching off your brain, focusing on a repetitive task and simply doing what the clients ask. It sounds like the most boring thing in the world, that second job I mentioned at the beginning of the article, but these games are perfect with a podcast or something similar playing in the background. In the comments on this particular 'Data Center', players highlight its "teaching" side and, despite the limitations of some systems, how realistic it feels.

The store from which we must order the components.

Now, it is not a simulator. In the comments, players who say they work in data centers point out that, although it is curious and represents some things very well, others do not match reality, and technical options are missing, such as VLANs or managing something as basic as power cabling. The good news is that it costs nine euros and, if it doesn't click with you in the first two hours, you can request a refund on Steam very easily.
In the end, it is not a game for everyone. No game is, really. But 'Data Center' is one more entry in that much-discussed wave of work games appearing lately. Managing a data center may not be your thing, but maybe restoring retro games is, or running the city's last video store before Amazon eats it.

Images | 'Data Center' on Steam

In Xataka | It seemed like a game of imitating movements. It was actually diagnosing autism better than many clinical tests

Ford has been slow to adapt to the electric car, so it is going to start manufacturing batteries for… data centers

Ford has decided to convert its electric vehicle battery manufacturing capacity into a large-scale energy storage business. The move has its own name: Ford Energy, a new division with $2 billion in investment planned for the next two years and the stated objective of supplying batteries to data centers, electricity companies and large industrial consumers.

The starting point is not exactly ideal for the company. Ford's electric division had accumulated net losses of $11.1 billion as of the fourth quarter of 2025, according to Reuters, and for this year the company expects to lose between $4 billion and $4.5 billion more in its electric and software division. "I think the customer has already spoken," Ford CEO Jim Farley told investors. With battery factories operating at low capacity and the US electric vehicle market in free fall, especially after the elimination of the $7,500 incentive last September, Ford has chosen not to dismantle that infrastructure but to redirect it.

What Ford Energy is and how it will work. The bet is built around the Glendale, Kentucky, plant, which will be converted to manufacture grid-scale energy storage systems. As Ford explained late last year, the facility will produce LFP (lithium iron phosphate) cells and storage modules. The cell technology is licensed from the Chinese firm CATL, with which Ford already had agreements for its electric vehicle line. The plan, according to the company itself, is to reach initial operational capacity within 18 months and at least 20 GWh of annual production by the end of 2027. In parallel, the BlueOval Battery Park Michigan plant, in Marshall, will continue producing LFP cells for Ford's upcoming midsize electric truck, but will also make lower-amperage cells aimed at residential storage.

Lisa Drake, the executive who heads Ford Energy, explained that the "predominant" business opportunity will be commercial electric grid customers, with data centers as the second priority and the residential segment as the third leg. Drake also noted that when the company went out to gauge demand, it became clear that the technology customers preferred was precisely the containerized prismatic LFP system, something Ford could easily manufacture thanks to its licenses. For his part, John Lawler, Ford's vice chair, said in the statement that Ford Energy's core purpose is to "capture the growing demand for reliable energy storage that reinforces the stability and resilience of the electric grid for utilities and large consumers."

The market it wants to conquer. With the explosion of artificial intelligence, the electricity consumption of data centers is skyrocketing on a global scale. The International Energy Agency puts the demand of these centers at around 945 TWh by 2030, approximately 3% of global electricity consumption, with projected growth of 15% annually. In the United States alone, according to the Battery Council International, this consumption could double to between 400 and 600 TWh by the same date. In that scenario, large-scale energy storage becomes critical infrastructure, and Ford, like many other repurposed manufacturers, sees a great business opportunity.

Ford is late, but it is not alone. The problem is that Tesla has a decade-long head start.
Its energy storage business deployed 46.7 GWh in 2025 alone, 48% more than the previous year according to TechCrunch, and it was also more profitable than Tesla's own electric car division, with gross margins close to 30% compared to around 15% for the automotive side. General Motors has also made a move: its joint venture with LG Energy Solution has just invested $70 million to convert its Tennessee plant, south of Nashville, to producing batteries for storage.

The transition, however, is neither easy nor cheap. Switching a factory from nickel-based chemistry, common in electric car batteries, to LFP can take up to 18 months and cost several hundred million dollars, according to Reuters. Added to this is technological dependence on China, which dominates the LFP supply chain, and 35% US tariffs on cathode and anode materials of Chinese origin.

What this means in the long term. As the same outlet notes, although demand for energy storage in North America is expected to nearly double in five years, from 76 to 125 GWh, that is not enough to absorb the more than 275 GWh of production capacity the automotive industry installed with electric vehicles in mind. Storage alleviates the problem, but does not completely solve it. Even so, this same reorientation is the one many other carmakers have opted for in order to take advantage of their infrastructure and contain the losses from their electric cars, especially in the United States, where things are weakest.

Cover image | Hans and Ford

In Xataka | Australia has a 150-kilometer straight highway. And to keep drivers from falling asleep, it has put puzzles on the signs

We believed that data centers in space were a thing of the future. Kepler has already activated the largest orbital cluster

For years, talk of data centers in space sounded like the kind of idea that always seemed a few years away. The conversation existed, of course, but it was almost always propped up by long-term plans, ambitious announcements and an industry that had not yet shown much real muscle in orbit. That is why what has just happened deserves attention: TechCrunch explains that Kepler Communications has launched the largest computing cluster currently operating in space, a sign that this race is beginning to leave the realm of promises and enter, little by little, the realm of infrastructure.

What Kepler has put into orbit. It is not a large facility suspended above our heads, but a distributed cluster made up of 10 operational satellites. Together they add up to around 40 Nvidia Orin processors aimed at edge computing, connected to each other by laser links. That set, launched in January of this year, is today the largest active computing cluster in orbit. The company itself also frames this network as a constellation designed to move data in space in near real time.

What it really is. So we are not looking at a massive orbital data center that replicates the terrestrial model, but at a distributed architecture that combines connectivity and processing directly in the space environment. The difference matters because it separates two plans that often get mixed up: one thing is the large-scale vision defended by players like SpaceX or Blue Origin, and quite another is this first step, much closer to immediate uses and the specific needs of missions in orbit.

The immediate business. If this orbital computing is starting to be interesting, it is because it addresses a fairly clear problem: it does not always make sense to send all the data to Earth to process it later (the toy calculation below illustrates why). The initial value of these systems lies in working with the information right where it is generated, something especially useful for more advanced sensors and for applications that require a faster response. Kepler also maintains that its network can serve as a basis for future processing and connectivity services between different space assets, and the outlet adds that the company already transports and processes data uploaded from the ground, as well as information collected by payloads hosted on its own satellites.

Sophia Space. Here a startup enters the picture that wants to upload its proprietary operating system to one of the satellites in the constellation and try to deploy and configure it on six GPUs spread across two spacecraft. In a terrestrial data center that would be almost routine, but it would be the first time we see something like it in orbit. For Sophia, moreover, the test has clear risk-reduction value ahead of its first launch, scheduled for the end of 2027. And we are not talking about a minor detail: the company is developing space computers with passive cooling, a way of attacking one of the big problems in this sector: overheating.

Kepler doesn't want to be that. Amid so much noise around orbital data centers, the company is trying to position itself in a somewhat different spot on the map. Its corporate presentation insists on a mission much more tied to communications, with a hybrid optical constellation designed to modernize the flow of data in low orbit and beyond. In that sense, it does not define itself as a data center company, but as infrastructure for space applications. The journey has begun.
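Why process in orbit at all? A toy comparison makes the bandwidth argument concrete. Every number below is an assumption of ours for illustration; none of them are Kepler specs.

```python
# Toy comparison: downlink raw sensor data vs. process it in orbit.
# All figures are illustrative assumptions, not Kepler numbers.

raw_per_day_gb   = 500  # imaging payload output per day (assumed)
downlink_mbps    = 200  # radio downlink rate (assumed)
pass_minutes_day = 30   # minutes/day in view of a ground station (assumed)

downlink_gb_day = downlink_mbps / 8 * pass_minutes_day * 60 / 1000
print(f"Can downlink ~{downlink_gb_day:.0f} GB/day of the {raw_per_day_gb} GB produced")

# If on-board inference reduces each scene to detections and summaries,
# say 1% of the raw volume, the same link is suddenly enough:
processed_gb_day = raw_per_day_gb * 0.01
print(f"Processed output: ~{processed_gb_day:.0f} GB/day -> fits the link")
```

Under these made-up numbers the link moves roughly 45 GB a day against 500 GB produced, which is exactly the mismatch on-board processing is meant to close.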
If this step by Kepler makes anything clear, it is that orbital computing no longer belongs only to the realm of grand presentations. SpaceX wants to deploy a massive network of satellites for AI, Google is preparing in-orbit tests with solar-powered chips, and Blue Origin has announced a constellation of more than 5,000 satellites. In parallel, Starcloud already launched a satellite in 2025 with an Nvidia H100 GPU, and Aetherflux is targeting 2027 for its first node.

Images | Kepler Communications | Sophia Space

In Xataka | The mystery of the misinflated balloon: the more we calculate the size of the Universe, the less sense it all makes

China has just launched its first undersea data center with total energy autonomy. The idea makes more sense than it seems

In the AI race, having robust data center infrastructure is essential, but first you need the energy to power it all. The United States may lead the chip industry (at least the strategic part of it), but China follows closely at an unstoppable pace and, furthermore, has the energy. And it is already connecting the dots, showing off its technical muscle and ingenuity: it already has the largest data center in the world and is also a pioneer in submerging them under the sea. Now it has added a twist with the first underwater data center that "drinks" directly from the wind, which has just opened.

This project represents the perfect union of two of China's strategic priorities: digital sovereignty and carbon neutrality. By placing computing infrastructure on the seabed and powering it directly with clean energy on site, China is addressing one of today's great technological problems: the insatiable energy consumption of AI and Big Data.

The project. About 10 kilometers off the coast of Shanghai, at the bottom of the East China Sea, a steel cylinder receives electricity directly from wind turbines and is cooled with seawater. It is the Lingang Subsea Data Center, an ambitious project promoted by Shanghai Hailan Cloud Technology (HiCloud) and built by CCCC Third Harbor Engineering. It consists of a series of data storage and processing modules encapsulated in watertight, submerged containers, connected via two 35 kV submarine cables to offshore wind turbines operating off the Shanghai coast. With a planned capacity of 24 MW across two phases, the first is already operational: it has a capacity of 2.3 megawatts and includes a ground control center, a vertical data module installed under the sea and the two main 35-kilovolt submarine cables.

Why it is important. Beyond not occupying land, a valuable saving in a city as crowded as Shanghai, and being installable close to where it is needed (coastline permitting), it solves three structural problems of the sector at once:

- Cooling. Seawater acts as a constant, free heat sink, eliminating the industrial air-conditioning systems that account for 40 to 50% of a conventional facility's electricity. The metric that measures a data center's energy efficiency, comparing total energy consumed against the energy used purely by the servers, is the PUE; a standard land-based data center averages slightly above 1.5, and this project promises to lower it to no more than 1.15 (see the sketch after this list).
- No fresh water consumption. Traditional data centers evaporate millions of liters of water to cool their servers, but this one relies on thermal exchange with the ocean, so it does not consume water resources.
- It absorbs surplus wind power. One handicap of wind energy is that generation depends on the wind rather than on demand, so without a battery, unconsumed energy is wasted. Thanks to the direct connection, the data center absorbs wind production in real time, acting as a constant consumer that reduces the renewable energy curtailed for lack of a destination.
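A quick illustration of what those PUE figures mean in practice. The 2.3 MW figure comes from the article; treating it as the IT load is our simplifying assumption, and the rest is just the PUE definition:

```python
# PUE = total facility energy / energy used by the IT equipment itself,
# so a PUE of 1.0 would mean zero overhead for cooling, power conversion, etc.

def overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power spent on everything that is not the servers."""
    return it_load_kw * (pue - 1.0)

it_load = 2300.0  # kW; Lingang's 2.3 MW phase 1, assumed here to be IT load
for label, pue in [("land-based average", 1.50),
                   ("Lingang target", 1.15),
                   ("Project Natick", 1.07)]:
    print(f"{label}: PUE {pue} -> {overhead_kw(it_load, pue):.0f} kW of overhead")

# land-based average: PUE 1.5 -> 1150 kW of overhead
# Lingang target: PUE 1.15 -> 345 kW of overhead
# Project Natick: PUE 1.07 -> 161 kW of overhead
```

Going from 1.5 to 1.15 cuts the non-IT power draw by more than two thirds at the same server load, which is the whole point of cooling with seawater.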
In figures. The magnitude of the project, in official numbers:

- The budget is 1.6 billion yuan, about 200 million euros.
- Total planned operational capacity of 24 MW (2.3 MW in the first phase).
- A design PUE below 1.15.
- More than 95% of the electricity comes from renewable sources.

Context. The name HiCloud is not new; in fact, it is an old acquaintance: it is the company behind the underwater prototype off Hainan, which it began installing in 2021. The international reference, however, is Microsoft's Project Natick (2013-2024), which demonstrated the potential of underwater centers: only 8 of its 864 servers failed, a far lower failure rate than any conventional data center over the same period, and it also achieved a very low PUE of just 1.07. Despite that, Microsoft shelved the idea: viability in terms of costs and maintenance is another story. The Lingang project, however, has top-level institutional support: it appears on the List of Green and Low Carbon Technology Demonstration Projects of the NDRC, China's top economic planning body.

How they did it. The servers sit in pressurized steel capsules filled with inert gases to prevent corrosion and fire, with a design that maximizes interior space and minimizes the impact of waves. Heat is dissipated by pumping seawater through radiators located behind the racks. The most complicated operation was hoisting the capsule into place in the open sea: the clearance between the legs of the support structure and the steel piles on the seabed was only 0.18 meters, with a maximum allowable deviation of 10 centimeters, so the crew relied on GPS and the crane vessel Sanhang Fengfan.

Roadmap. The project follows a staged progression that leaves some unknowns. First came the Hainan prototype (2021-2024). In 2025 the Shanghai project began; its phase 1 concluded in October of that year and went live just a few weeks ago. The key phase that will take capacity up to 24 MW has no official public date. What is known is that the consortium formed by HiCloud, Shenergy Group, China Telecom Shanghai, INESA and CCCC Third Harbor Engineering signed a cooperation agreement in October 2025 to scale to 500 MW linked to offshore wind, although where and when remains unclear.

Yes, but. The 2.3 MW of phase 1 is practically a demonstration, not commercial infrastructure: a large conventional data center operates at between 50 and 500 MW. It also still has to resolve the issues Microsoft's Project Natick left open, such as underwater maintenance: HiCloud has not published protocols or long-term repair costs. And scaling to 500 MW is, for now, more an intention than a project.

In Xataka | Where you see a mountain, China sees a …

We had a perfect plan to decarbonize the electrical grid. The brutal consumption of data centers has blown it up

The daily headlines announce multi-million-dollar investments in new language models and cutting-edge chips. Venture capital investors have pumped more than half a trillion dollars into AI startups over the last five years. But, as a revealing TechCrunch analysis warns, the smart money has begun to change sides: today, the best investment in artificial intelligence is no longer software.

The reality on the ground has become much harsher. Putting up walls and stacking servers in a giant data center has become the easy part of the equation. The real wall the tech sector is crashing into is finding the electrons needed to power it all. According to a report by the analysis firm Sightline Climate, up to 50% of the data center projects announced for 2026 could face delays. Of the 190 gigawatts (GW) of capacity the firm tracks globally, just 5 GW are actually under construction today. The bottleneck is no longer the microchips. It is access to the electrical grid.

The tyranny of 24/7. Consumption has run amok at a pace that twentieth-century infrastructure cannot absorb. A Goldman Sachs analysis projects that AI will drive the energy consumption of data centers up by 175% by 2030. The figures all point in the same direction: the Open Energy Outlook predicts that the combined electricity demand of data centers and crypto mining will grow by 350% this decade. As a result, the pristine image of the technological cloud is evaporating. Google's emissions have increased by 48% in the last five years, and Microsoft's by 31% since 2020. The reason? What the industry calls the "tyranny of 24/7". The algorithms do not sleep and require a continuous, steady power supply; they cannot be turned off simply because the wind stops blowing or the sun sets. Given the global lack of mass storage systems, the fuel covering this urgent gap is not green: it is natural gas, which has come out of retirement as the sector's great structural support.

A global collapse with two faces. The pressure has already broken the market's balance. In the PJM region, which supplies 13 eastern US states and has the highest density of data centers in the world, capacity prices went from $30 to $270 per megawatt-day in a single auction at the end of last year. As John Ketchum, CEO of NextEra Energy, noted, we are facing a "golden era of energy demand", but with an insurmountable physical limit: "the new electrons cannot reach the network quickly enough." This electrical asphyxiation is redrawing the global map, and Europe is the best example. Historically, the European market was dominated by the "FLAP-D" markets (Frankfurt, London, Amsterdam, Paris and Dublin). But these cities' grids can no longer keep up. According to Greenpeace data, data centers accounted for almost 80% of electricity consumption in Dublin, forcing Ireland to impose a moratorium. The market share of these traditional capitals will fall sharply by 2035, causing a mass exodus to the Nordic countries (with uncongested grids and cold climates) and to southern Europe, such as Spain, Greece and Italy, in search of green megawatts.

The hardware and grid problem. Scratching beneath the surface of this collapse, the physical problem splits into two big gaps. First, the machines to generate the energy are missing. Since intermittent renewables are not enough, companies are turning to gas. However, gas turbines have become a rare commodity.
Three years ago, Siemens Energy executives considered this market "dead"; today, the factories are so overwhelmed that delivery times for these turbines can stretch to seven years. Second, the "plumbing" is missing. Once the electricity is generated, the task of taming it inside the building falls to transformers, an iron-and-copper technology that has barely changed in 140 years. As TechCrunch explains, as servers demand ever more power, traditional electrical equipment will end up taking twice as much space as the servers themselves. It is mathematically unsustainable.

'Smart money' changes sides. Against this backdrop, venture capital is pivoting. Big tech companies (Amazon, Google, Oracle) are starting to behave like energy giants, devising alternatives to minimize their dependence on an outdated public grid through hybrid or on-site generation approaches. The solutions are split across several fronts:

- The nuclear resurgence: Google has signed a pioneering agreement with Kairos Power to develop seven small modular reactors (SMRs) by 2030, and Amazon tried (although regulators temporarily blocked it) to connect a data center directly to the Susquehanna nuclear power plant.
- Super batteries: Google is collaborating in Minnesota with the utility Xcel Energy and the startup Form Energy to install batteries capable of discharging energy for 100 hours, stabilizing the peaks of renewables.
- Hardware innovation: dozens of startups (such as Amperesand or DG Matrix) backed by investment funds are developing silicon-based "solid-state" transformers, seeking to finally retire old iron and copper and save vital space in facilities.
- Regulatory surgery: in southern Europe, bodies such as Spain's CNMC are applying "flexible access permits", forcing centers to accept curtailment in emergencies so as not to bring down the whole grid.

The paradox: AI as savior of the electrical system. The story, however, has a fascinating twist. The same technology that today threatens to burn out half the world's cables could end up saving the electrical system. According to estimates by the consultancy Deloitte, applying artificial intelligence to optimize industrial systems and electrical grids will save more than 3,700 TWh globally by 2030; that is, AI would save almost four times the energy consumed by all the data centers on the planet combined. A report by Ember on Southeast Asia (ASEAN) supports this, calculating that integrating AI into the management of its grids will save more than 67 billion dollars and avoid the emission of almost 400 million tons of CO2. But to get to that future of efficiency, you first have to turn on the machines today. And what is at stake is the world economic map. Hosting these centers is …

Data centers are real “heaters”. And they are settling in regions as hot as Aragón

Data centers are a black hole in several senses. They are drinking up global NAND chip manufacturing capacity (which affects SSDs, RAM and SD cards), the companies that make batteries cannot keep up, and yes, they consume water, but far more alarming is their energy consumption. On that front they are insatiable and, in the end, thousands of pieces of heat-generating equipment are causing another unexpected effect: they are turning these facilities into heat islands. And it is something with the potential to affect 340 million people.

What's happening. Andrea Marinoni is an associate professor in the Earth Observation group at the University of Cambridge and the coordinator of a group of researchers from that center and Nanyang Technological University who have published a study called "Heat Island Data: Measuring the Impact of Data Centers on Climate Change". In it, they present the results of measurements around more than 6,000 data centers located far from dense urban areas, with the aim of identifying whether these facilities, by themselves, are a notable heat source. The result? "A greater impact than expected," according to the researchers. They compared historical temperature measurements at those data center locations over the last 20 years to see how things have changed recently and whether the data centers had any influence. And, as we said, the impact seems to have been strong: an average of 2°C, with maximums of up to 9°C in some cases.

Location doesn't matter. This produces a heat island effect, which is what happens when a large amount of heat is concentrated in an area where it should not be. It is common in big cities, which is why the most efficient urban architecture tries to combat the phenomenon. And it happens wherever the data center is. The study gives several examples:

- Bajío region, Mexico: high data center density and a stable climate, yet a land surface temperature increase of 2°C over the last two decades, a trend not identified in nearby areas without data centers.
- States of Ceará and Piauí, Brazil: an increasing trend of 2.8°C, projected to reach 3.5°C in the next five years, while nothing similar is observed in the surrounding areas.
- Aragón, Spain: an anomalous 2°C increase in surface temperature that stands out against neighboring provinces.

Potential damage. Aragón is a worrying example because the region is consolidating itself as one of the "lungs" of hyperscalers in Europe, as well as one of the Spanish regions key to the expansion of data centers and to European technological sovereignty. And the problem is that, according to the study, the impact of this increase in surface temperature reaches up to 10 kilometers from the hyperscalers. The researchers detail that in surrounding areas some 4.5 kilometers from the data centers, a 1°C increase can be measured, which sounds like little but, in climate terms, is a lot. Furthermore, they estimate that the impact of these broad heat islands is something with the potential to affect 340 million people.

Yes, but. This has not been the only recent research on the effect of data centers on the land where they sit. Researchers at Arizona State University installed sensors on cars driving near these centers to capture measurements, and they noticed the same thing as the Cambridge team.
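Methodologically, both teams are doing a version of the same thing: estimating a per-site temperature trend and comparing it against nearby control sites. A minimal sketch of that idea, with synthetic numbers standing in for the real satellite and sensor measurements used in the studies:

```python
# Minimal sketch of the trend analysis described above: fit a linear
# trend to 20 years of land-surface-temperature readings for one site.
# The data here is synthetic; the studies use real measurements.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(2005, 2025)  # 20 years of annual observations
lst = 22.0 + 0.10 * (years - 2005) + rng.normal(0, 0.3, years.size)
# 0.10 °C/year baked in -> ~2 °C over two decades, like the Aragón case

slope, intercept = np.polyfit(years, lst, 1)  # least-squares linear fit
print(f"Estimated warming: {slope * 20:.1f} °C over 20 years")

# The studies' key step is the comparison: run the same fit on nearby
# control sites without data centers and check the difference in slopes.
```

The signal only counts as a data center effect when the fitted slope clearly exceeds what the same procedure finds at the control sites, which is exactly the contrast the examples above describe.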
But one thing to keep in mind: both studies present measurements, but they have not been peer-reviewed. And there are experts, such as Ralph Hintemann, principal investigator at the Borderstep Institute for Innovation and Sustainability, who point out that, although the results are there and are interesting, some figures "seem very high." Hintemann, in fact, focuses not so much on the heat concentrated around data centers as on the bigger problem: the amount of energy they need and the return to fossil fuels to meet peak demand.

Image | Tedder

In Xataka | Data centers in space promise to save the planet. And also to ruin Earth's orbit

China says it has built its largest data center. And confirms that its problem is precisely the chips

China has just switched on its new technological pride in Shenzhen: an AI cluster with 14,000 petaflops built entirely with Huawei Ascend 910C chips. The city has presented it as the first 10,000-card-scale computing center built with completely domestic technology. It is an undeniable milestone but, put in context, also an alarm signal and a dose of reality.

Why it is important. The Shenzhen cluster, with all its rhetoric of technological sovereignty, represents about 1% of the capacity of the largest US data center in operation today. In other words: China has built, with great institutional effort, what OpenAI already had available to train GPT-4 in 2022. The gap is not a question of ambition (China has it), or capital (it has that too), or energy (it certainly has that as well). It is a chip issue: what they are capable of manufacturing, and in what volume, today.

Between the lines. The Shenzhen government statement highlights energy efficiency metrics and occupancy rates of 92%. That is genuinely good data. But the selection of indicators (the cherry-picking) says a lot through what it omits: there are no direct comparisons with the NVIDIA H100 clusters that colonize the data centers of Microsoft, Google or Amazon. Publishing only what you have is also a way of not publishing what you lack.

The context. At this point nobody doubts that China has the electricity, the engineers and the money to build large-scale AI infrastructure. What is still missing, despite the advances, are the chips. Export restrictions imposed by Trump have cut off access to advanced semiconductors from NVIDIA and TSMC, and that has forced China to accelerate its own ecosystem. Huawei has responded with the Ascend 910C, a capable chip that still has limitations in performance and, above all, in production volume. If wafers were not in short supply, this data center would be a hundred times larger.

Yes, but. Can China close that four-year gap before it grows even bigger? The answer depends almost entirely on how far its domestic semiconductor industry manages to scale, and on whether Western sanctions manage to stifle that process. For now, Shenzhen is celebrating an achievement that is as undeniable as it is, in the eyes of Silicon Valley, still stuck in 2022.

Featured image | Huawei

In Xataka | Memory prices have started to fall in some markets. There is still a long way to go to close the AI crisis

The French AI startup profiting from geopolitical chaos just raised $830 million. For European data centers

The French startup Mistral has raised 830 million dollars, and it has done so with one objective: to build AI data centers in Europe based on NVIDIA chips and technology. That is good news, but it also has a troubling side.

Merci, Monsieur Trump. There is a geopolitical irony in Mistral's rise. The French AI startup has become a reference in Europe, but it has done so not so much because of its models or technology (that too) as because of Donald Trump. Since the American president returned to power and began dismantling the era of globalization, demand for "sovereign" European alternatives to the big US technology platforms has skyrocketed. Governments and companies that previously turned to Microsoft, Amazon or Google without a second thought are now looking for options that free them from those dependencies. Mistral is precisely the obvious alternative for AI.

830 million to own its infrastructure. The round Mistral has raised is not venture capital but debt financing, granted mainly by French banks such as Bpifrance, BNP Paribas, HSBC and MUFG. That is a telling detail: the company no longer needs to convince investors, but to finance the infrastructure required to scale its business. Those $830 million are destined for its future European data centers, starting with its facility in Bruyères-le-Châtel, near Paris. That center will house 13,800 GB300 chips from NVIDIA and will begin operating before the end of June.

Debt, not equity. There is an important difference between the venture capital rounds that have financed Mistral until now and this new debt round. Venture capital is not paid back: investors bet on a stake in the company and profit if it grows and is sold or goes public. Debt is repaid, with interest, regardless of how the business is going. That Mistral has opted for this mechanism suggests it is optimistic about its future, but it also adds pressure: the company cannot afford strings of loss-making quarters. Betting with other people's money has its problems, but doing it with borrowed money has serious ones too.

The feat of the 13,800 chips. That this French data center has secured 13,800 GB300 chips, NVIDIA's most advanced, is no minor detail. These AI accelerators sit on the waiting lists of many companies, and here Mistral competes with hyperscalers like Microsoft, Google or xAI, which buy tens of thousands of units and have priority agreements. That this European startup has managed to lock in that volume suggests real negotiating capacity, or a special relationship with NVIDIA and its CEO, Jensen Huang.

A European AI ecosystem. Mistral is gradually becoming the natural European option for companies that want to avoid exposure to North American dependencies. Having everything under European control is what more and more governments in Europe are looking for, and this is an effort to offer that certain independence… which, of course, is anything but complete. Be that as it may, Mistral has become Europe's great seller of sovereignty as a product.

But. Mistral expects to reach 200 MW of computing capacity by the end of 2027, including a €1.2 billion facility in Sweden with 23 MW that will begin operating next year.
These are decent numbers in a European Union that has barely lifted its head in this segment, but they are very far from China's and, above all, the United States'. OpenAI and its partners have infrastructure agreements worth several hundred billion dollars, and while here we talk in megawatts, there they talk in gigawatts. The distance remains enormous.

And the dependency is still there. The paradox nobody seems to want to mention matters: the "sovereign" European infrastructure Mistral is building depends entirely on chips designed by an American company and manufactured in Taiwan. If for any reason Washington decided to make Europe a banned region for its technology and prohibited the export of GB300 chips, Mistral's expansion would grind to a halt. The quest for digital sovereignty is interesting, but the reality is that Europe will continue to depend on US technology and Taiwanese manufacturing capacity to an even greater extent than the US or China depend on their rivals. The old continent has activated some measures to mitigate the problem, but they will not stop it from persisting in the long term.

Paris, European capital of AI. The startup has turned France into one of Europe's great AI references. Mistral was valued at $12 billion after raising $1.7 billion in financing led by ASML, and it expects to surpass one billion in annual recurring revenue. It is now joined by Yann LeCun's recently launched startup: Advanced Machine Intelligence Labs (AMI Labs) has already raised more than $1 billion and will also be based in Paris. One more detail is worth highlighting: Bpifrance, the French public investment bank, is leading the round, which means the French state itself is backing this initiative.

In Xataka | Mistral does not generate hype, it is a discreet AI company, it does not boost the shares of any company, but it already makes more money than Grok

From sharing mobile data to paying like we did a decade ago

Saturday is a good day to have your internet cut off. At first you don't notice, because you're at home and not using the computer (as much). But you find out eventually, and that's what happened to me when I realized I had lost my O2 fiber connection, again, because of the underground works on the A-5, again. Two days later I'm still in the same situation, like many residents of the area, and it is becoming a small (but bearable) headache.

The cuts are back. These works have caused outages before: in July, August and November 2025, and also in January 2026. The affected areas and operators have varied each time, but reports on social networks indicate that this time the cut has been significant and has affected customers of Movistar/O2, Orange, Vodafone, Jazztel and Digi.

Meanwhile, unlimited data. After spotting the problem on Saturday morning, I called my operator, O2, to find out what was happening. They confirmed it was a fiber optic cable cut caused by the A-5 works and explained that they hoped to resolve it as soon as possible. And, as in the previous outage, they offered me unlimited mobile data on all the lines associated with my contract while it lasts. It is something operators usually offer in these cases, and it certainly mitigates the problem… although it does not make it disappear.

Tethering saves the day (mostly). Since then I have been using my computer on mobile data: I share my smartphone's connection via tethering, which lets me work normally and at decent speeds. This weekend I also shared that connection with the Chromecast on my TV to watch a series or a movie without trouble.

Paying like before. Businesses in the area have been affected by the outages too. A supermarket near my house, for example, had no way to take mobile payments this weekend: the POS terminal did not accept contactless payments, so you had to pay either in cash or with a physical debit/credit card inserted into the terminal's slot.

Better to be proactive. Users have few options here beyond calling the operator to find out what happened and to get that unlimited data activated if it hasn't been already. It pays to be proactive and call: at least in my case, the unlimited data "bonus" was not activated until I called, which makes sense, since operators may not know exactly which users are affected. You will probably wait a few minutes for an agent, because these outages affect a lot of people; in my case the wait was about five or six minutes this time.

Now we wait. As is often the case, there is no clear estimate of when the problem will be resolved. In January the outage lasted roughly two days; this time it is already on its way to lasting three days or more. Neither the operators nor the Community of Madrid offer much information, and in most cases the only thing users can do is be patient.

In Xataka | There is an extensive system to avoid getting cut off in the 48 underground kilometers of the M-30. It's time to renew it
