The region with the largest energy deficit in Spain is getting the data centers

Spain is filling up with data centers. A report on the Iberian Peninsula from the real estate consultancy CBRE reveals the interest of large technology companies. The fact is striking, but even more striking is that the main focus of these companies is a region that, a priori, would not seem ideal for such facilities: Madrid.

Hyperscalers. The CBRE study, cited by Cinco Días, points to a singular concentration in Spain of data center projects from the so-called hyperscalers. A hyperscaler is a mass provider of cloud services that operates a gigantic network of data centers distributed across the planet. Amazon is a good example of this type of company, but there are more, and all of them seem to be turning their attention to the Iberian Peninsula.

Big Tech bets on Spain. Elliot Zounon, responsible for the report, explained that "there is no investor, large operator or technology company that does not have plans in its strategy to establish its data center project in the Iberian market." But especially in Madrid. Particularly striking is the deployment of projects: current and expected future capacity in the Community of Madrid amounts to a total of 203 MW. Some of the most important companies in the sector, such as Microsoft, Google, Oracle, IBM, Kyndryl and OVHcloud, have data centers in the region. Various projects, with an investment of 23.4 billion euros through 2028, point to significant growth in this area, and Madrid's capacity is expected to reach 222 MW by 2026.

Madrid, close to FLAP-D. In the European Union this market has been dominated by the group known as FLAP-D, an acronym for Frankfurt, London, Amsterdam and Paris, joined in recent times by Dublin, with a capacity of 328 MW. Madrid belongs to the so-called Tier 2, a kind of "second division" of cities with substantial data center capacity.
The capital is ahead of Milan, Zurich, Berlin and Oslo, and Barcelona is also in this group, occupying tenth position in the Tier 2 ranking with 42 MW installed.

And what about the energy? This proliferation of data centers in the Community of Madrid is paradoxical, especially since it is the region that produces the least energy in all of Spain and depends almost completely on external supply. In 2024 Madrid produced 1,334 GWh, roughly the same as in 2021, while its annual electricity consumption in 2024 was 27,487 GWh. The community thus concentrates 11% of national electricity demand. Of course, Spain is becoming a genuine net exporter of electricity, something that favors Madrid's role as a focus of attention for future data centers.

Emptied Spain produces, the big cities consume. The truth is that Madrid's energy deficit is logical if we consider that it brings together a great density of population and industry. Here, as in other large Spanish capitals, the energy inequality is clear: energy is generated in far more depopulated regions (the example of Aragon with wind power is remarkable) and ends up being consumed in the big cities. Spain has bet heavily on renewables, but Madrid is a case apart: there are no wind farms in Madrid at all.

Not everything is megawatts. The choice of Madrid depends not only on raw megawatts but on a combination of intangible advantages that technology companies take into account. The capital concentrates interconnection nodes and a dense network of operators that facilitate the exchange of data traffic (something crucial for cloud services and AI applications). The presence of corporate headquarters also plays a role, as does the fact that logistics costs are lower than in remote locations that may have cheaper energy but are more isolated in terms of network and services.

The human factor. There is also the labor market and its technical profiles.
For companies, it pays to deploy infrastructure near where the talent is, and professionals in the sector usually settle in large cities like Madrid, precisely because that is where, along with other capitals, the job offers are concentrated. The same happens with that "first division" of large European data center capitals: Frankfurt, London, Amsterdam and Paris also bring together that range of technical profiles.

The risk of being an energy black hole. Its practically zero self-production turns the Community of Madrid into a kind of "energy black hole": it absorbs resources generated far away and depends totally on the strength of the Spanish grid, which recently suffered a worrying (although unlikely to be repeated) general blackout.

But. Even with that energy deficit, hyperscalers reach agreements through long-term contracts (PPAs, Power Purchase Agreements), prior agreements with grid operators and even investments in renewables. The idea is to decouple the siting decision for these data centers from where the local energy is produced. Madrid must of course ensure its interconnection and supply capacity (perhaps with grid reinforcement if necessary), but energy production in Spain, which at times even throws energy away, is a guarantee for this type of facility.

Image | Kyndryl | Community of Madrid

In Xataka | Spain was supposed to have an "anti-blackout" plan. It has run into an insurmountable obstacle: politics
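The energy figures quoted above can be cross-checked in a few lines. This is a minimal sketch whose inputs are the article's own numbers; the implied national demand of roughly 250 TWh is consistent with the order of magnitude of Spain's annual electricity consumption:

```python
produced_gwh = 1_334       # Madrid generation, 2024 (per the article)
consumed_gwh = 27_487      # Madrid consumption, 2024
share_national = 0.11      # Madrid's share of national demand

self_sufficiency = produced_gwh / consumed_gwh
national_demand_gwh = consumed_gwh / share_national

print(f"Madrid self-sufficiency: {self_sufficiency:.1%}")               # ~4.9%
print(f"Implied national demand: {national_demand_gwh / 1000:.0f} TWh")  # ~250 TWh
```

A self-sufficiency below 5% is what the text means by "practically zero self-production".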

There are two companies suspected of the theft of TSMC's critical data, and neither of them is Chinese: both are Japanese

TSMC's leadership has a price. The Taiwanese company is the largest semiconductor manufacturer on the planet, and it has built its success on the fine-tuning of extremely competitive integration technologies. Its most advanced photolithography node is currently 2 nm; in fact, it is about to begin large-scale manufacturing of chips of this class. In all likelihood its competitors would love to know its most sophisticated processes, especially those linked to its 2 nm node. And, apparently, some of them are trying to get hold of this information.

As we explained three days ago, the Taiwanese authorities have arrested three TSMC employees for allegedly stealing this company's trade secrets. As might be expected, behind the arrests is TSMC itself, as the Taiwan High Prosecutors Office has revealed in a statement. According to Nikkei Asia, those responsible at the company realized that two employees and a former employee had obtained critical information about its 2 nm photolithography. This information is very valuable. In fact, it could be used by a competitor to optimize its own semiconductor manufacturing processes.

Two unexpected suspects: Tokyo Electron and Rapidus Corporation. The investigation has not yet determined whether the stolen information has reached another company, but United Daily News reports that investigators have searched the offices of the Japanese company Tokyo Electron. The latter specializes in the design and manufacture of wafer processing equipment, and its most ambitious current project is the fine-tuning of plasma wafer etching machines. This equipment is involved in defining the pattern that will later be transferred to the wafer.
Rapidus is building a chip manufacturing plant in northern Japan in which it plans to produce 2 nm semiconductors.

According to SCMP, Tokyo Electron has confirmed that it has fired an employee of its Taipei (Taiwan) subsidiary for being involved in the theft of TSMC's critical information. The Japanese company also states that it is collaborating with the Taiwanese authorities who are carrying out the investigation. "That Tokyo Electron finds itself in the spotlight over this incident is an unfortunate accident," declared Atsushi Osanai, professor at Waseda University (Japan).

However, it is not the only Japanese company implicated in this conflict. Money.udn.com maintains that some of the arrested TSMC employees delivered to Rapidus Corporation hundreds of photographs and data linked to TSMC's most advanced process integration techniques. This company aims to compete head-to-head with TSMC, Intel and Samsung in the chip production market. Interestingly, it is very young: it was founded on August 10, 2022 by the Japanese government with an initial capital of 7,346 million yen (just under 46 million euros) contributed by, and here comes the interesting part, Sony, Toyota, NEC, SoftBank, Kioxia, Denso, Nippon Telegraph and Telephone, and MUFG Bank.

Rapidus is currently building an integrated circuit manufacturing plant in northern Japan, in the city of Chitose (Hokkaido), where it plans to produce 2 nm semiconductors. The first prototypes of these chips are already ready, but large-scale manufacturing will not arrive until 2027 at the earliest. In any case, as with Tokyo Electron, Rapidus' possible involvement in the theft of TSMC data has not been officially confirmed. In fact, it is possible that the perpetrators acted on their own and offered the stolen information to Rapidus without the latter having requested or accepted it.
It will be up to those leading the investigation to settle the question.

More information | Money.udn.com | SCMP

In Xataka | South Korea fears US reprisals. To avoid them, its old lithography equipment is gathering dust in a warehouse

Europe will invest 30 billion euros in data centers for AI

Europe cannot miss the artificial intelligence (AI) train. It cannot afford to. This technology already has a very deep impact on a country's economy, scientific and technological capacity, and military development, and currently the US and China lead decisively in this area. Until now Europe seemed content to follow in the wake of the two great powers disputing world supremacy, but its strategy is about to change.

According to CNBC, the European Union plans to invest 10 billion euros in the construction of thirteen data centers for AI, as well as 20 billion euros in a network of "gigawatt-class" facilities. These latter data centers are the largest and most ambitious, and their name indicates that, given their size, they consume a great deal of electricity. In fact, a gigawatt is equivalent to one billion watts, roughly the power a small city can draw. At the moment sixteen European countries have expressed interest in hosting these facilities, and, according to CNBC, the first of these large data centers will be located in Munich (Germany).

Each gigawatt-class facility will cost between 3 and 5 billion euros and will bring together no fewer than 100,000 state-of-the-art GPUs for AI (possibly NVIDIA H100 chips). All this sounds very good, but it raises a doubt we cannot ignore: it is not clear how the countries involved in this plan will resolve the electricity supply for such demanding facilities.

It will cost Europe a lot to keep pace with the US and China. The US government led by Donald Trump is determined to lead in the field of AI whatever it costs. And in principle this initiative, baptized by the new administration as the 'Stargate' project, will cost 500 billion dollars. This money will come from the coffers of the Japanese investment group SoftBank; from OpenAI, the creators of ChatGPT; from Oracle; and, finally, from the Emirati investment firm MGX.
These companies will underwrite the construction, over the next four years, of an advanced network of data centers that will house the high-performance computing infrastructure needed to sustain US leadership in the AI field. The spearhead of these facilities is already being built in Texas (USA), in a town called Abilene. And it is colossal. In fact, this first data center of the 'Stargate' project will bring together, according to OpenAI, more than two million AI chips.

The 'Stargate' infrastructure should be fully ready before President Trump's current term expires. When the US government announced this plan with great fanfare, it left a big question open: how did it plan to solve the electricity supply required by the new facilities? Large AI data centers consume a lot of electricity, which has led some technology companies to invest in nuclear power plants to guarantee the supply these facilities require.

At the moment this question is not completely resolved. And it is not because the 'Stargate' infrastructure should be completely ready before President Trump's current term expires, and a new nuclear power plant can hardly come into operation within four years. Even so, OpenAI and Oracle have made official an agreement to build the infrastructure needed to deliver an additional 4.5 GW to their data centers. Interestingly, SoftBank does not participate in the financing of this expansion, although, as I mentioned a few lines above, it does participate in the 'Stargate' project.

In any case, there is another unknown in this equation that also has a lot to say: China. "We expect China to significantly increase its investments in AI and semiconductors in response to US dominance in AI," foresee analysts at the consultancy CBM. It makes sense. These two great powers are disputing world supremacy, so it is understandable that each significant step taken by one of them receives a more or less forceful response from the other.
We can be sure that 2025 will be an even more agitated year than 2024 on the geopolitical and technological fronts, so we will be watching closely the steps that the US and China will surely take. And Europe. Also Europe.

Image | Christina Morillo

More information | CNBC

In Xataka | Huawei attacks Nvidia's positions in China: it wants to have the dominant hardware in AI inference processes
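Going back to the EU's gigawatt-class figures quoted earlier, a quick back-of-the-envelope check (a minimal sketch using only the numbers in the article) shows what the budget implies per accelerator:

```python
cost_low_eur = 3_000_000_000   # per gigawatt-class facility, lower bound
cost_high_eur = 5_000_000_000  # upper bound
gpus = 100_000                 # GPUs per facility

per_gpu_low = cost_low_eur / gpus    # 30,000 EUR per GPU
per_gpu_high = cost_high_eur / gpus  # 50,000 EUR per GPU
print(f"Implied budget per GPU: {per_gpu_low:,.0f}-{per_gpu_high:,.0f} EUR")
```

A budget of 30,000-50,000 euros per GPU is a plausible order of magnitude for a high-end accelerator plus its share of building, power and cooling, which suggests the cost and GPU figures are at least mutually consistent.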

The planet's most monstrous data centers, gathered in one chart

The development of AI has fueled a new 'arms race' at global scale. The goal is not to dominate another territory, but to amass as much computing power as possible. The main technology companies are deploying data centers around the world with one goal in mind: training artificial intelligence. Some data centers are true behemoths, and in this chart we can see the most powerful compute clusters in the world, with one standout protagonist: Elon Musk.

Cluster. Before getting into numbers, a nuance. When we talk about computing power, we may be talking about a computer cluster or a supercomputer. The latter is an extremely powerful system that can be built from processors specially designed to reach extreme computing power or, more commonly, from thousands of high-performance servers. Supercomputers are used for scientific simulations and tasks that require enormous amounts of computation, and their cost is brutal.

On the other hand, we have the "affordable" version of a supercomputer: the computer cluster. It is a set of interconnected workstations that work in parallel to solve problems. It is similar to a supercomputer, but with the advantage of being a more flexible system: as you need more machines, you can expand the cluster. In addition, its components are more standard, which also keeps the cost lower. That said, it is a distinction that has blurred in recent years.

The 100,000 club. With that settled, let's turn to the chart produced by Visual Capitalist with data from Epoch AI. In it we can see the most powerful clusters today, with one caveat: it mixes planned and operational systems. X, Elon Musk's company, switched on xAI Colossus Memphis Phase 1 last year, a huge data center with 100,000 NVIDIA H100 GPUs aimed at training 'Grok', its AI model. It was something that surprised even Jensen Huang, CEO of Nvidia. It is a computing monster with enormous calculation power, and the figure is expected to grow to 200,000 GPUs.
We will look at the energy consequences of this later. Behind Musk's company comes Meta, which states that it has a cluster "bigger than 100,000 H100 GPUs" for its model 'Llama 4'. Then there are those who keep up a bit more mystery. For example, Microsoft, whose cluster for Azure, Copilot and OpenAI's AI is estimated at 100,000 GPUs split between H100 and H200.

Two worlds. Outside that 100,000 club we have Oracle with its 65,536 NVIDIA H200s, another Musk company, Tesla, with Cortex Phase 1 and its 50,000 GPUs, and the United States Department of Energy with El Capitan, the most powerful supercomputer in the world. Official or estimated, what this chart makes clear is that one country has taken AI computing seriously: the United States. It is the one pushing hardest with its data centers (of the 10 clusters, the first nine are in the US and the last in China), and it is not only building inside its borders, but outside them too. One example is Meta's plan to build data centers in Spain, or the one practically the size of Manhattan.

European expansion. In the chart we can see two European clusters. On one side, JUPITER at the Jülich Supercomputing Centre in Germany, with its confirmed GPUs. On the other, NexGen in Norway, with about 16,300 GPUs. Europe has launched several financing initiatives aimed at boosting its competitiveness through programs such as GenAI4EU, with a budget of 700 million euros between 2024 and 2026. The objective is to build large data centers and, in the 2025 call, 76 proposals were submitted from 16 different countries. Now, this development of European AI must be aligned with the AI Act, the regulation in force since February 2025 that seeks to ensure transparency and an ethical AI.

Quantity vs. efficiency in China. Beyond the US companies, the one that has really gotten its act together in AI is China.
Following a roadmap very different from the Western one, China is focusing on having (supposedly) fewer GPUs at work, operating with greater efficiency, at much lower costs than the American companies, and with equivalent results. DeepSeek and the more recent Kimi are two examples of this.

Nvidia rubs its hands. In all this battle for AI there is one clear winner: Nvidia. However you look at it, and beyond who has more or fewer GPUs to do the job, the clear winner is Nvidia. In China it is less clear due to the trade veto, but the main data centers in the world use Nvidia's architecture with its H100 and H200 chips. And that is only counting the "normal" AI cards, since the company also has the B200, with four times the performance of the H100. In fact, the company seems so focused on the AI race that it may have neglected the market it led over AMD for years: its gaming cards.

Those are Lenovo data center servers. Companies seek to reduce their footprint by reusing hot water after heat dissipation to, for example, fill pools or feed showers. Image | Xataka

The planet, not so much. The consequence of this AI expansion is that data centers not only need huge amounts of energy to function, but also water to dissipate the heat from the equipment. There is one notable absentee from the chart, Google, which also operates its own AI data centers and which, together with others such as Meta or Microsoft, needs nuclear power plants to feed its facilities. Consumption is so extreme that renewables are insufficient during demand peaks, forcing the use of fossil fuels such as coal or gas (it is estimated that Colossus' 200,000 GPUs consume 300 MW, enough to power 300,000 homes), and, as we said, water use has become a matter of debate in the territories that are candidates to host new data centers. So much heat needs dissipating that China is already building at the bottom of the ocean.

In Xataka | China wants to become the world's AI engine.
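The consumption figure quoted for Colossus can be broken down quickly (a minimal sketch; both averages are implied by the article's own numbers):

```python
total_power_w = 300e6  # estimated draw of Colossus at 200,000 GPUs
gpus = 200_000
homes = 300_000

per_gpu_w = total_power_w / gpus    # 1,500 W per GPU slot
per_home_w = total_power_w / homes  # 1,000 W average draw per home

print(f"{per_gpu_w:.0f} W per GPU, {per_home_w:.0f} W per home")
```

The 1.5 kW per GPU is plausible: an H100 alone is rated around 700 W, and the rest of the budget goes to the host servers, networking and cooling that each accelerator drags along.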

OpenAI and Oracle are preparing a monstrous data center with 2 million AI chips

The 'Stargate' plan is the great bet of the government led by Donald Trump to keep the US at the forefront of artificial intelligence (AI) development. When the project was made public on January 22, what stood out was the economic endowment that would make it possible: no less than 500 billion dollars. This money will come from the coffers of the Japanese investment group SoftBank; from OpenAI, the creators of ChatGPT; from Oracle; and, finally, from the Emirati investment firm MGX.

These companies will underwrite the construction, over the next four years, of an advanced network of data centers that will house the high-performance computing infrastructure needed to sustain US leadership in the AI field. The spearhead of these facilities is already being built in Texas (USA), in a town called Abilene. And it is colossal. In fact, this first data center of the 'Stargate' project will bring together, according to OpenAI, more than two million AI chips.

4.5 GW goes a long way. When the US government announced this plan with great fanfare, it left a big question open: how did it plan to solve the electricity supply required by the new facilities? Large AI data centers consume a lot of electricity, which has led some technology companies to invest in nuclear power plants to guarantee the supply these facilities require.

Oracle began installing the first racks with NVIDIA B200 platform servers in June.

At the moment this question is not completely resolved. And it is not because the 'Stargate' infrastructure should be completely ready before President Trump's current term expires, and a new nuclear power plant can hardly come into operation within four years. Even so, OpenAI and Oracle formalized a few hours ago an agreement to build the infrastructure needed to deliver an additional 4.5 GW to their data centers.
Interestingly, SoftBank does not participate in the financing of this expansion, although, as I mentioned a few lines above, it does participate in the 'Stargate' project. According to OpenAI, part of the gigantic Abilene data center is already in operation. In fact, Oracle began installing the first racks with NVIDIA B200 platform servers in June. It is surprising that part of the facility is active after just six months, but we should not overlook that the Abilene data center seeks to demonstrate that the 'Stargate' plan is viable within the deadline its backers have promised.

In any case, there is another unknown in this equation that also has a lot to say: China. "We expect China to significantly increase its investments in AI and semiconductors in response to US dominance in AI," foresee analysts at the consultancy CBM. It makes sense. These two great powers are disputing world supremacy, so it is understandable that each significant step taken by one of them receives a more or less forceful response from the other. We can be sure that 2025 will be an even more agitated year than 2024 on the geopolitical and technological fronts, so we will be watching closely the steps that the US and China will surely take.

Image | Christina Morillo

More information | OpenAI

In Xataka | Huawei attacks Nvidia's positions in China: it wants to have the dominant hardware in AI inference processes

Meta already uses a concrete mix created by algorithms in its data centers

Meta has used a concrete mix designed by algorithms in one of its data centers. According to the company, the formula promises to be more sustainable and faster to apply, and it has been developed with open source tools. With this approach Meta is not only seeking to move toward zero emissions, but also to accelerate the construction of infrastructure that keeps growing, as demonstrated by the data center it is raising under provisional structures.

The invisible weight of concrete. Few materials are as omnipresent as concrete. It is used in roads, bridges, homes... and also in the data centers where a good part of our digital life is housed. The problem is that manufacturing its components, especially cement, generates a huge amount of CO2. The World Economic Forum indicates that it accounts for about 8% of global emissions. Meta has set out to reduce that footprint without compromising strength or speed of work. And that is where its new model comes in.

An AI that does not create chatbots, but mixtures. To develop this system, Meta allied itself with Amrize, one of the world's largest cement manufacturers, and with the University of Illinois Urbana-Champaign. Together they have created an AI model that proposes concrete compositions. The model is based on Bayesian optimization and is built with BoTorch and Ax, two open source tools developed by Meta itself.

A slab test at the Rosemount data center. The challenge was not minor: each mix involves combining different types of cement, aggregates, water, additives and supplementary materials such as slag or fly ash. The exact proportions, their origin or even the time of year can alter the result. Traditionally, they explain, validating a new formula has taken weeks. With AI, the process accelerates because the system learns from previous data, proposes promising new combinations and refines its predictions after each test.
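BoTorch and Ax provide industrial-strength implementations of that loop; as a rough illustration of what they automate (fit a probabilistic surrogate to the tests done so far, then spend the next "lab test" where the surrogate is most optimistic), here is a toy pure-Python sketch. The single mix parameter and the `strength` function are made-up stand-ins for a real lab test, not Meta's actual model:

```python
import math
import random

def kernel(a, b, length=0.15):
    """RBF kernel: similarity between two mix parameters."""
    return math.exp(-((a - b) ** 2) / (2 * length ** 2))

def solve(A, y):
    """Solve A x = y by Gaussian elimination (small dense systems)."""
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior(xs, ys, xq, noise=1e-4):
    """Gaussian-process posterior mean and variance at query point xq."""
    K = [[kernel(a, b) + (noise if i == j else 0.0)
          for j, b in enumerate(xs)] for i, a in enumerate(xs)]
    alpha = solve(K, ys)
    kq = [kernel(x, xq) for x in xs]
    mean = sum(k * a for k, a in zip(kq, alpha))
    w = solve(K, kq)
    var = kernel(xq, xq) - sum(k * wi for k, wi in zip(kq, w))
    return mean, max(var, 0.0)

def strength(x):
    """Hypothetical lab measurement: peak strength at x = 0.3, plus noise."""
    return 1.0 - (x - 0.3) ** 2 + random.gauss(0, 0.01)

random.seed(0)
xs = [0.05, 0.5, 0.95]             # three initial lab tests
ys = [strength(x) for x in xs]
grid = [i / 100 for i in range(101)]
for _ in range(8):                 # Bayesian-optimization loop
    def ucb(x):                    # optimistic estimate: mean + 2 std
        m, v = gp_posterior(xs, ys, x)
        return m + 2.0 * math.sqrt(v)
    nxt = max(grid, key=ucb)
    xs.append(nxt)
    ys.append(strength(nxt))       # "run the lab test" on the chosen mix
print(f"best mix parameter tested: {xs[ys.index(max(ys))]:.2f}")
```

Each iteration spends one "lab test" where the surrogate is most promising, which is how a handful of pours can stand in for weeks of blind trial batches.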
Implementation of the AI-generated concrete formulation at the data center.

From the laboratory to the field. One of the first large-scale validations took place at the data center Meta is building in Rosemount, Minnesota. There, the contractor Mortenson applied the new mix in one of the building's support slabs. The objective was not only to check its strength, but also its workability and final finish: these slabs must be perfectly smooth and durable. The result, according to the firm, exceeded all technical standards. The AI-designed formula not only met the strength and curing requirements, but also behaved well on site: it poured without problems and produced an adequate surface. After two iterations, and with minimal human adjustments, the model had generated a recipe that improved on the usual industrial formulas in speed, strength and emissions-reduction potential.

An open model. The system Meta has developed is not a commercial product or a closed tool. The company has published the code, data and technical approach in an open GitHub repository called SustainableConcrete. The idea is not to hoard the formula but to share the method: a way of applying artificial intelligence to concrete design that can be adapted to other projects, suppliers or materials. We will have to wait and see whether more initiatives like this appear; that could ease the adoption of alternative mixes across a wide variety of constructions. As we have seen, Meta has not invented a new material. What it has done is use AI to find new concrete formulas.

Images | Xataka with Gemini 2.5 Flash | Mark Zuckerberg | Meta (1, 2)

In Xataka | Nvidia says China has the best open source AI in the world. These praises have a very clear intention

Meta is building data centers in tents

Mark Zuckerberg already has the galácticos he needed for his superintelligence team. Getting the talent was only part of the plan. The other part is what to do with it, and putting it to work requires one thing: computing power. Meta is in a hurry and, while advancing in several directions, it has begun to build data centers inside outdoor tents. As amazing as it sounds.

At Meta they are setting their own pace for raising data centers. Given the urgency of adding computing capacity, they needed to build data centers as quickly as possible, and the shortcut has been to build them in outdoor tents. This is something a company spokesperson confirmed to Business Insider. That they are doing it does not mean it is common practice: it is not about adding power at any cost, but about doing so in a way that the integration of the equipment does not put the energy and cooling sustainability of the system at risk. It is surprising that they are using tents, especially in summer, given the difficulties they add to heat management.

Zuck pulling a Musk. Zuckerberg's move recalls another rival in the race to win at artificial intelligence, and that is no accident. Elon Musk already pulled off the madness of installing 100,000 NVIDIA GPUs in 19 days, something that amazed even Jensen Huang. SemiAnalysis, the first outlet to reveal Meta's pharaonic plans, said the design of Meta's data centers is influenced by the speed with which Musk has operated at xAI and at Tesla. Facebook in its early days was the company that championed "move fast and break things", and now it is showing that this philosophy is still alive, as it is at OpenAI.

The great plan. While it finishes its tents, Mark Zuckerberg announced that they were going to "invest hundreds of billions of dollars in compute to build superintelligence." In practice, he mentioned the construction of two mega data centers. The first will be Prometheus, which they intend to bring into use as soon as next year.
Second, they will build 'Hyperion', which he boasts will have a size similar to Manhattan's and a capacity of up to 5 gigawatts over the next few years. The tents are provisional until they reach the fixed capacity they are pursuing. To put the numbers in context, SemiAnalysis recalls that, so far, there are no operational NVIDIA H100 and H200 clusters of more than 200 megawatts.

Why so much haste. Context is everything. Meta was very well positioned in the artificial intelligence race, even though its great bet of the decade was the metaverse, where it was burning through huge amounts of money. However, everything changed with Llama 4. The company's latest large language model not only disappointed at launch, but has been overshadowed by competition as powerful as o3, Claude 4 or Gemini 2.5 Pro. At Meta they are so aware of the lost competitiveness that their default artificial intelligence for programming is Claude. Yes, over Code Llama. In any case, investors do not seem worried, judging by Meta's spectacular stock run.

Having the best AI will take much more than talent and models. Meta's plan to take the lead in artificial intelligence shows that the race will be run on many legs. First, Zuckerberg has secured the talent of his great rivals. Second, by acquiring Scale AI de facto, they have made sure of having quality data. But that is not enough. Access to a brutal computing capacity is needed, which it will obtain with the aforementioned data centers. And that is where the race also becomes about energy: Meta is going full speed for nuclear power, even striking deals with nuclear plants that were headed for closure. And it goes further, with the firm commitment to build a plant that will use a technology focused on harnessing underground heat without leaks.

Images | Mark Zuckerberg | Meta

In Xataka | We have calculated how much money the Big Tech companies are spending on data centers. The numbers are dizzying

Japan smashes the data transmission record again with 1.02 petabits per second. The impressive part is the distance

Japan has set a new world record in data transmission over optical fiber, sending information at 1.02 petabits per second across 1,808 kilometers. The achievement, by a joint team from Sumitomo Electric Industries and Japan's National Institute of Information and Communications Technology (NICT), marks a new milestone in long-distance optical communications. And best of all: it was achieved with optical fiber compatible with existing equipment.

The key technological leap. The advance lies not only in the speed, but in the distance covered while keeping a standard cable diameter. Previous records in pure speed had reached 1.7 petabits per second, but covered only 63.7 kilometers. This new mark multiplies the distance by 28 without increasing the thickness of the cable, bringing the technology closer to real commercial applications. Image: ISPreview

How it works. The system uses a 19-core optical fiber integrated in a cable with a standard cladding diameter of 0.125 millimeters, the same as today's fibers. Instead of a single beam of light, the fiber carries 19 parallel signals, exploiting both the C and L bands of the optical spectrum. Sumitomo Electric optimized the structure and arrangement of the cores to minimize losses, while NICT developed amplifiers capable of simultaneously boosting the signals of all the cores.

The scale of the achievement. To put the magnitude in context: 1 petabit is equivalent to 1,000 terabits, or 1 million gigabits, per second. Compared with the average broadband speed in Spain (usually around 250 Mbps), this record is approximately 4 million times faster. In theory, 10 million 8K video channels could be transmitted simultaneously. Or the entire Netflix library downloaded in seconds.

Practical implications. The record sets a new benchmark in the capacity-distance product (1.86 exabits per second times kilometer) using standard-diameter fiber.
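The headline numbers can be checked quickly. Note that the record speed is rounded to 1.02 Pbit/s here, so the computed product comes out slightly under the reported 1.86 Ebit/s·km:

```python
speed_bps = 1.02e15    # 1.02 petabits per second
distance_km = 1_808
cores = 19
spain_avg_bps = 250e6  # ~250 Mbps average broadband

product = speed_bps * distance_km  # capacity-distance, bit/s * km
print(f"capacity-distance: {product / 1e18:.2f} Ebit/s*km")              # ~1.84
print(f"vs. Spanish broadband: {speed_bps / spain_avg_bps:,.0f}x")       # ~4,080,000x
print(f"per core: {speed_bps / cores / 1e12:.1f} Tbit/s")                # ~53.7 Tbit/s
```

The per-core figure also shows where the gain comes from: each of the 19 cores carries tens of terabits per second, and the record multiplies that by the core count without fattening the cable.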
This means that future networks could multiply their capacity exponentially without changing the existing physical infrastructure. Intercontinental submarine cables, like those connecting Europe with America, could benefit from this technology, although over greater distances (more than 5,000 kilometers) speeds would be lower, but still impressive. The road to commercialization. Although these advances will not immediately reach domestic connections, they do mark the future of long-distance communications. The team is now working on improving the efficiency of the amplifiers and the signal processing to bring the technology closer to real deployment. Global internet traffic keeps growing, but it is reassuring to know that optical fiber, in the same cable format we use today, still has room to evolve in this field. Cover image | Kirill Sh In Xataka | Mobile internet is acting up: what can you do when you don't have a connection

China has the ability to stop the construction of new AI data centers. It is a nightmare for the US

Over the last two years the Chinese government has fought back against the sanctions of the US and its allies with a strategy that has proven very effective. China controls the production and processing of several minerals that are critical for the semiconductor, renewable energy, and electric vehicle industries, among other sectors, which has led the administration of Xi Jinping to regulate their export very strictly. In early December 2024 it chose to prohibit the export to the US of several critical minerals, among them three metals essential to the chip industry: gallium, germanium, and antimony. Shortly afterwards the Chinese government added two more critical metals to its list of export restrictions: scandium and dysprosium. However, there is a far less exotic chemical element than the ones I have just mentioned that is barely being talked about. China also controls it, and it is using it to put the US against the ropes. Bismuth is a fundamental metal for the global technology industry. Although it is not grabbing as many headlines in the international media as the rare earths, bismuth (Bi) is an essential chemical element not only for the integrated circuit industry, but for the entire global technology sector. It is a whitish, crystalline, and relatively brittle metal that acquires a pinkish hue on contact with air. It shares some physicochemical properties with lead and tin, but it has a distinctive characteristic that has helped establish it as the essential metal it is: it is much less toxic than other heavy metals, such as lead. However, this is by no means its only quality. It is also the most diamagnetic of the metals, so when placed in a magnetic field it is very weakly repelled. Its electrical resistance is high and its thermal conductivity very low. Interestingly, the only metal with an even lower thermal conductivity is mercury.
Its melting point is relatively low (about 271.3 °C), while its boiling point is close to 1,560 °C. Finally, bismuth has another very unusual property among metals that is worth not overlooking: it expands when it solidifies. Bismuth is an essential metal thanks to its role in solders and in the tuning of thermoelectric materials. If we had to keep only two characteristics of all those we have just reviewed, the chosen ones would be its low toxicity and its ability to expand on solidifying. In fact, these properties largely justify its use in industries with a strategic role for many countries, such as chips, consumer electronics, renewable energy, or electric vehicles. Although it participates in a wide range of applications, bismuth is essential above all for its role in solders and in the tuning of thermoelectric materials. For many decades the metal usually used in solders was lead, but it has a serious problem: it is very toxic. Gradually it has been displaced by alloys of bismuth and tin, which are much less toxic and, in addition, have a very low melting point. These alloys play a leading role in the manufacture of flexible substrates, printed circuit boards, and all kinds of electronic components. Thermoelectric materials, for their part, make it possible to generate electricity from temperature differences (and vice versa), so they are very important in the development of efficient cooling systems. China is currently the world's largest producer of bismuth. In fact, it controls between 80 and 84% of the supply of this metal, so the global distribution chain is in its hands. In 2024 alone this Asian country produced 13,000 metric tons of this chemical element, while outside China's borders only 3,000 more tons were refined.
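As a quick sanity check, the refining figures just quoted are consistent with the 80-84% supply share the article cites; a minimal sketch:

```python
# Quick check of the 2024 refining figures quoted above.
china_tons = 13_000    # metric tons refined in China
rest_tons = 3_000      # metric tons refined outside China

share = china_tons / (china_tons + rest_tons)
print(f"China's share of refined bismuth: {share:.1%}")   # 81.2%
```

That 81.2% falls squarely inside the 80-84% range attributed to China.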
This near-absolute control has led the Chinese government to drastically restrict bismuth exports as a response to the sanctions of its rivals. In the US, some technology companies are already against the ropes because their bismuth reserves are running out. And these are not exactly minor companies. Google, Amazon, and Nvidia are three of the US companies that urgently need Chinese bismuth to sustain the construction of their new data centers for artificial intelligence (AI) applications, so they have asked the US government to reach an agreement with its Chinese counterpart. Otherwise, the development of AI in the country led by Donald Trump will be compromised. In this arena, as we have just seen, China holds all the cards. More information | Business Insider China In Xataka | The two most important chip companies in China have a problem: 5 nm is proving to be a stumbling block

Spain had high hopes as a data center powerhouse. It hadn't counted on the heat waves

On July 19, 2022, at 06:33 PST, Google and Oracle data centers in London stopped working. The reason was not a human or electrical failure. What happened was simply that 40 °C was exceeded in the British capital, and the cooling systems of those data centers could not withstand those temperatures. The result: outages of multiple services for hours. It was a dangerous preview (and we are talking about London, which is not especially hot) for data centers, especially considering what lies ahead. Climate change also threatens data centers. The only good news about this summer's extreme heat is that it will seem mild compared with the summers of the coming years. The issue, worrying enough for humans, has a surprising side effect: these extreme temperatures are going to be a colossal challenge for data centers. We keep seeing record temperatures one summer after another, and that will put the cooling systems of those centers to the test. Hello, cooling. There is no definitive figure for the temperature at which data centers must operate: certain experts recommend working in a range of 18 to 27 °C, while others hold that the range should be even cooler, between 17 and 21 °C. And of course, that is only achieved with powerful air and liquid cooling systems. Heat has to go somewhere. The hotter it is, the more cooling is needed, and that means greater electricity consumption and, therefore, a bigger electricity bill. If those temperatures are not kept under control, the efficiency of the components drops. Just like on your phone. It is exactly what happens with our phones and computers when they overheat: if the cooling systems cannot keep those temperatures at bay, throttling usually kicks in: the components slow down to consume less power and dissipate less heat. And the water, too. In many data centers liquid cooling also plays a fundamental role, and during heat waves water consumption rises as well.
That is especially worrying now that Big Tech has announced it will invest tens of billions of dollars in new data centers for AI. Liquid cooling at full throttle. In facilities dedicated to AI, a huge number of chips is crammed into small spaces, which makes liquid cooling solutions much more appropriate. And the same thing happens again: in the face of extreme temperatures, cooling systems have to be scaled up to cope with possible overheating. Evaporative cooling. Of course, the engineers who develop these kinds of projects turn to various solutions to head off problems, especially when data centers sit in areas where summers are particularly extreme. This is where techniques like direct evaporative cooling come in, in which cooled, humidified air is fed directly into the data center. There are other techniques, such as water cooling towers, and of course intelligent management of the air flow is essential. Be careful where you put your data center. All of this makes choosing the ideal location for new data centers increasingly important. In Spain, Aragon is becoming an absolutely leading region. There are several projects in which Big Tech will (theoretically) put data centers into operation in this autonomous community, where the risk of extreme temperatures is not as high as it would be further south in the peninsula. But with ever-higher summer temperatures, what can be done? Ice-cold clusters. The option in some cases is simply to choose locations where the weather is much cooler, or even icy. Facebook built several data centers in Lulea (Sweden) back in 2013, but in Spain we have an even more striking case: the Social Security CPD moved from Madrid to Soria for the simple reason that it is colder there. That, among other things, saved the 150,000 euros that cooling these systems in Madrid would have cost during the summer.
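To get a feel for how location and climate translate into electricity bills, here is a minimal sketch based on PUE (Power Usage Effectiveness, the ratio of total facility power to IT power). The IT load, PUE values, and electricity price below are illustrative assumptions, not figures from the article:

```python
# Sketch of how cooling overhead scales with PUE. A PUE of 1.0 would
# mean zero overhead; hotter climates generally push PUE up because
# the cooling plant works harder.

def annual_cooling_cost(it_load_kw: float, pue: float, eur_per_kwh: float) -> float:
    """Cost of the non-IT overhead (mostly cooling) over one year."""
    overhead_kw = it_load_kw * (pue - 1.0)     # power beyond the IT load itself
    return overhead_kw * 24 * 365 * eur_per_kwh

IT_LOAD_KW = 500      # a modest facility (assumption)
PRICE = 0.15          # EUR per kWh (assumption)

for pue in (1.2, 1.4, 1.6):
    cost = annual_cooling_cost(IT_LOAD_KW, pue, PRICE)
    print(f"PUE {pue}: ~{cost:,.0f} EUR/year in overhead")
```

Even at this small scale, the gap between a cool-climate PUE of 1.2 and a hot-climate 1.4 is over a hundred thousand euros a year, the same order of magnitude as the Soria savings mentioned above.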
Image | Meta In Xataka | We have calculated how much money Big Tech is spending on data centers. The numbers are dizzying
