Sierra was the second most powerful supercomputer in the world. When its time came, it ended up in the shredder, literally

Supercomputers represent the extreme of modern computing: machines capable of performing enormous numbers of calculations every second and supporting scientific or strategic projects of enormous complexity. Sierra was one of those giants. For years it operated at the Lawrence Livermore National Laboratory, where it was in charge of highly sensitive simulations for the United States government. At its peak it occupied second place in the TOP500 ranking, which lists the world's fastest supercomputers. But in high-performance computing, even the most advanced systems have a limited lifespan. After seven years of service, Sierra has been retired.

A giant for simulations. When Sierra began operating in 2018 at the Livermore facility, it was incorporated into the center's high-performance computing infrastructure to support the nuclear arsenal maintenance program managed by the National Nuclear Security Administration. Instead of resorting to real nuclear tests, scientists use computer simulations capable of reproducing the behavior of the weapons and materials involved in their design. This work requires extraordinary computing power and also has implications in areas such as nonproliferation and counterterrorism.

Almost at the top of the ranking. As we noted above, for several years Sierra was among the fastest machines on the planet. According to the TOP500 ranking, it recorded 94.64 petaflops, that is, tens of quadrillions of floating-point operations per second. To achieve this it used an architecture that was unusual at the time, based on IBM Power9 processors combined with NVIDIA Volta V100 graphics accelerators. This design allowed work to be distributed among thousands of computing nodes and offered a notable leap over previous generations of supercomputers.

When the hardware starts to fail. Supercomputers do not escape a reality common to any technological infrastructure: over the years, the hardware begins to deteriorate. In systems of this type, the usual useful life is around five to seven years, a period after which the failure rate begins to grow and maintaining the system becomes more complex. As these machines accumulate hours of operation, the likelihood increases that certain components will fail or need to be replaced. In Sierra's case, moreover, part of the problem was very specific: some of its components were no longer being manufactured and the version of the operating system it ran had lost support.

The successor. Sierra's retirement is also related to the arrival of a new generation of supercomputing at the center. In 2025, El Capitan began operating, the system destined to take Sierra's place within the laboratory's computing infrastructure. Although at first glance the two machines may look similar, the difference is inside. El Capitan uses an architecture based on AMD Instinct MI300A APUs and a memory system shared between CPU and GPU, which allows it to achieve much higher performance. According to data released by the lab, this machine can reach 1.809 exaflops, about 19 times faster than Sierra at its peak according to TOP500.
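The "about 19 times" figure follows directly from the two TOP500 numbers quoted above; a minimal Python check, using only the article's figures:

```python
# Sanity check of the speedup quoted above, using the article's figures:
# Sierra at 94.64 petaflops, El Capitan at 1.809 exaflops (1,809 petaflops).
sierra_pflops = 94.64
el_capitan_pflops = 1.809 * 1000

print(el_capitan_pflops / sierra_pflops)  # ≈ 19.1, i.e. "about 19 times faster"
```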
Disassembling a supercomputer piece by piece. The end of Sierra was not simply a matter of shutting the system down and leaving it out of commission. The process was carried out in several phases, beginning with the progressive removal of computing nodes and internal components. Technicians dismantled entire racks, extracted batteries and separated different elements for recycling or controlled destruction. Some parts, such as system boards or metal structures, were sent to specialized facilities for shredding. Since Sierra had worked on simulations linked to the US nuclear arsenal, the laboratory had to rule out any possibility of partial data recovery or reconstruction of sensitive information, which is why the storage devices received even stricter treatment.

Images | United States Department of Energy

In Xataka | Meta has been buying chips from NVIDIA and AMD for years. Now it also makes its own so as not to fall short

For 45 years we thought we understood how stars like our Sun rotate. A Japanese supercomputer has just cast doubt on it

Understanding how stars rotate may seem like a technical detail, but it is actually a central piece of understanding their evolution. For 45 years, theoretical models held that Sun-like stars would eventually change the way they rotate as they aged. The idea was that, as a star lost speed over billions of years, its spin pattern would reverse and the poles would rotate faster than the equator. Now, new research from Nagoya University suggests that that prediction might not come true.

The findings. The work, published in Nature Astronomy, suggests that solar-type stars could maintain the same rotation pattern we observe in the current Sun throughout their lives. That is, the equator would continue to rotate faster than the polar regions even as the star slows down with age. The simulations carried out by the team indicate that magnetic fields play a decisive role and could prevent the regime change that theoretical models had taken for granted for decades.

How a star like the Sun actually rotates. Unlike the Earth, which rotates as a solid body, the Sun is made of extremely hot plasma. That causes different regions to spin at different speeds. In the case of the Sun, the equator completes one revolution approximately every 25 days, while the regions near the poles take about 35 days. This phenomenon is known as solar-type differential rotation. For decades, theoretical simulations predicted that this pattern would not be permanent. As stars age and their global rotation slows over billions of years, the plasma flows within them should reorganize. Predictions indicated that there would come a time when the behavior reversed: the equator would rotate more slowly and the poles faster, a regime the researchers call anti-solar differential rotation.

The unexpected role of magnetism. The new simulations suggest that the scenario predicted by decades of theoretical models may not come to pass. According to the results of the study, stars similar to the Sun would maintain the same type of differential rotation throughout their lives. Even as the star slows down with age, the equator would continue to rotate faster than the poles, rather than reversing the pattern as previous simulations proposed.

A supercomputer on stage. To reach that conclusion, the team turned to Fugaku, Japan's most powerful supercomputer, installed at the RIKEN research center in Kobe and operational for shared use since March 2021. With its help, the researchers carried out an extremely detailed simulation of the interior of solar-type stars. Each simulated star was divided into about 5.4 billion calculation points, a much higher resolution than that used in previous work. This level of detail matters because earlier simulations worked at much lower resolutions; under those conditions, the magnetic fields tended to dissipate artificially within the model, which led to underestimating their influence on the star's internal dynamics. In the new simulation, however, the magnetic fields remained stable and showed a clear effect: they help prevent the reversal of the rotation pattern.
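To make the rotation numbers above concrete, here is a minimal Python sketch of a solar-type differential rotation profile. The sin²(latitude) form is a standard textbook parameterization chosen purely for illustration, not the model used in the study; only the 25-day and 35-day periods come from the article.

```python
import numpy as np

# Illustrative solar-type differential rotation: fast equator, slow poles.
# The sin^2(latitude) profile is a common textbook choice, NOT the model
# from the Nagoya University study; only the rotation periods (~25 days at
# the equator, ~35 days at the poles) come from the article.
omega_eq = 360.0 / 25.0    # angular velocity at the equator, degrees/day
omega_pole = 360.0 / 35.0  # angular velocity at the poles, degrees/day

def omega(latitude_deg: float) -> float:
    """Angular velocity (degrees/day) at a given latitude."""
    s2 = np.sin(np.radians(latitude_deg)) ** 2
    return omega_eq + (omega_pole - omega_eq) * s2

for lat in (0, 30, 60, 90):
    print(f"latitude {lat:2d}°: one revolution every {360.0 / omega(lat):.1f} days")
```

In an anti-solar regime, the sign of the latitude dependence would flip, with the poles completing a revolution faster than the equator; the study's result is that this flip never happens.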
The implications. Understanding more precisely how Sun-like stars rotate is key to interpreting their magnetic activity over time. This aspect is related to well-known phenomena on our own star, such as the approximately 11-year solar cycle that regulates the appearance of sunspots and episodes of magnetic activity. A better understanding of these processes could also help improve the stellar evolution models astronomers use to study distant stars.

Images | NASA

In Xataka | PLD Space has raised 180 million euros with Mitsubishi at the helm: the Spanish space startup grows with Japanese money

Google has solved problems in two hours that would take three years on a supercomputer. It’s the quantum advantage we needed

Google has taken a notable step in the field of quantum computing with a new algorithm called Quantum Echoes. The algorithm has demonstrated for the first time a "practical and verifiable quantum advantage" that leaves today's large supercomputers far behind.

13,000 times faster than a supercomputer. The new algorithm made it possible to show that a quantum computer based on Google's Willow quantum chip can successfully execute a verifiable algorithm that exceeds the capacity of today's large supercomputers. That computer ran the algorithm 13,000 times faster than the best current classical supercomputer executing comparable code.

"Quantum verifiability." Google's quantum computer solved the problem in just over two hours, where Frontier, the second most powerful supercomputer in the world, would have taken 3.2 years. It also did so in a verifiable way: the result can be repeated on the quantum computer itself or on any other of similar caliber.

Quantum echoes. The algorithm resembles an advanced echo: you send a signal into the quantum system, perturb one qubit, and then precisely reverse the evolution of the signal to "listen" to the resulting echo. This echo is special because it is amplified by constructive interference, a quantum phenomenon in which waves add up to become stronger, which allows the effect to be measured precisely. The algorithm makes it possible to model the structure of systems in nature, from molecules to black holes.
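As a toy illustration of the echo idea described above (and only that: this is a tiny classical simulation of a generic echo-style experiment, not Google's actual Quantum Echoes algorithm or its Willow hardware), here is a minimal Python sketch: evolve a state forward with a unitary, perturb one qubit, undo the evolution, and measure how much of the original state comes back.

```python
import numpy as np

# Toy "echo" experiment on n qubits, classically simulated: apply a random
# unitary U (forward evolution), flip one qubit (the perturbation), apply
# U-dagger (time reversal), then measure the overlap with the initial state.
# A generic illustration of the echo idea, not Google's algorithm.
rng = np.random.default_rng(42)
n = 4
dim = 2 ** n

# Random unitary from the QR decomposition of a complex Gaussian matrix.
z = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
u, _ = np.linalg.qr(z)

# Pauli-X perturbation acting on the last qubit.
x = np.array([[0, 1], [1, 0]], dtype=complex)
perturb = np.kron(np.eye(dim // 2), x)

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0  # start in |0...0>

echo = u.conj().T @ perturb @ u @ psi0    # forward, perturb, reverse
fidelity = abs(np.vdot(psi0, echo)) ** 2  # how much of the state "echoes" back
print(f"echo fidelity: {fidelity:.4f}")   # well below 1: the perturbation scrambles
```

Without the perturbation the fidelity would be exactly 1; how quickly it decays as the perturbation spreads through the system is the kind of signal an echo protocol amplifies and measures.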
An achievement with a lot of Nobel Prize behind it. The milestone builds on decades of research in this area, including work by the recent Nobel Prize winner Michel H. Devoret, who is part of the Google team. Together with his colleagues John M. Martinis and John Clarke, he laid the foundations for this advance at the University of California, Berkeley in the mid-1980s.

Hello, qubit. Their discovery: the properties of quantum mechanics can also be observed in electrical circuits large enough to be seen with the naked eye. That gave rise to superconducting qubits, the basic building blocks with which Google (like other companies) has built its quantum computers.

Devoret joined Google in 2023, strengthening the company's trajectory in its pursuit of the now famous "quantum supremacy."

Promising practical applications. The advance is aimed squarely at solving important problems in fields such as medicine and materials science. Quantum computing remains an experimental technology and faces a key challenge in error correction, but Quantum Echoes demonstrates that "quantum software" is advancing at a pace parallel to hardware. Google applied Quantum Echoes to a proof-of-concept experiment in nuclear magnetic resonance. This technique acts as a "molecular microscope," a powerful tool that could help design drugs or, for example, establish the molecular structure of new polymers.

A marathon. This new milestone demonstrates the progress the technology has made in recent years, but Google is not alone here. Microsoft and IBM have also made notable advances, and of course there are numerous startups working in this area, both in the US and in China.

In Xataka | Decoherence is the biggest problem with quantum computers. This superconductor wants to end it

Nvidia's AI supercomputer costs three million dollars. And to run, it relies on switches wired with three kilometers of cable

When Nvidia presented its new AI chips, the B200 with Blackwell architecture, it took the opportunity to present an AI accelerator called the GB200. And by joining 36 of those accelerators it created its AI server, the monstrous DGX GB200 NVL72, which holds some spectacular surprises.

Each node is a beast. Each of those GB200 accelerators has an Nvidia Grace CPU with 72 Arm Neoverse V2 cores and two B200 GPUs. By combining their power, we end up with a kind of colossal single GPU delivering 1.44 exaflops at FP4 precision.

A cabinet that weighs over a ton. The DGX GB200 NVL72 looks like a small, narrow cabinet that is above all very dense: the rack weighs 1.36 tons. Inside there are 18 "Bianca" compute nodes in 1U format, and each of them carries two GB200s, or in other words four B200 GPUs (hence 18 x 4 = 72). The estimated cost of this AI server is about three million dollars.

Liquid cooling is key. The heat dissipated by these components is remarkable, which makes liquid cooling the best option here. The system applies not only to the Grace CPUs and the B200 GPUs, but also to the NVLink chips in the switches, which can heat up considerably due to the massive transfer of data between the accelerators.

Interconnections everywhere. For all these GPUs to work together, each of the 36 GB200s has specialized network cards with fifth-generation NVLink support that connect each compute node to the others. For this there are nine switches that provide that huge number of interconnections.

3 km of cable. The system delivers a bidirectional bandwidth of 1.8 TB/s among the server's 72 GPUs. But as The Register points out, the really surprising thing is that in total, inside that "cabinet," there are 3.2 kilometers of copper cable. The switch module alone weighs more than 30 kilograms, owing both to these components and to the more than 5,000 cables used so that all the Nvidia GPUs work together in perfect synchrony.

Why copper? Opting for copper cable may seem strange, especially considering the bandwidth this machine demands. However, a fiber-optic solution posed clear problems: it would have required electronic components to stabilize and convert the optical signals, which would have increased not only the cost but also the power consumption of the final system.

Can it run Crysis? The performance of each B200 chip is already brutal on its own: its power is triple that of the GeForce RTX 5090, and the entire server includes 72 of these AI-specialized GPUs, which shows the computing capacity this machine possesses. It also has fourth-generation RT (ray tracing) cores, which would theoretically allow these AI chips to be used for video games, although of course that is not their purpose. In fact, its performance in that area would probably be almost as poor as that of the Nvidia H100.

Sky-high consumption. Although the new chips are much more efficient than the H100 (they consume 25 times less, says Nvidia), this AI server has an estimated TDP of 140 kW. Since the average consumption of a Spanish home is around 3,000 kWh per year, one hour of use of the Nvidia server consumes as much as an average Spanish home does in 17 days. Keeping it on and running all year implies a consumption similar to that of some 415 average Spanish homes over the same period.
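The consumption comparison above is easy to verify; a minimal Python sketch using only the figures quoted in the article (a 140 kW TDP and 3,000 kWh per year for an average Spanish home):

```python
# Back-of-the-envelope check of the consumption comparison above, using
# only the article's figures: 140 kW of TDP and an average Spanish home
# consuming 3,000 kWh per year.
rack_kw = 140.0
home_kwh_year = 3_000.0
home_kwh_day = home_kwh_year / 365.0   # ≈ 8.2 kWh per day

one_hour_kwh = rack_kw * 1.0           # one hour at full TDP
print(one_hour_kwh / home_kwh_day)     # ≈ 17 days of home consumption

full_year_kwh = rack_kw * 24 * 365     # running nonstop all year
print(full_year_kwh / home_kwh_year)   # ≈ 409 homes (the article rounds to 415)
```

The small gap between 409 and the article's 415 comes down to rounding assumptions (hours per year, per-home consumption), not to the order of magnitude.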
In Xataka | AMD has a splendid roadmap for its AI chips. The problem is still in its software

Elon Musk turned an abandoned US factory into the most powerful supercomputer in the world. Nobody thought of the neighbors

You could say that Elon Musk has created a perfect circle around what until recently was an abandoned factory on the outskirts of Memphis. There he has installed Colossus, the most monstrous supercomputer on the planet, built to push AI to new limits and to define the future of Tesla's cars. And to complete the combo for the richest man in the world, everything is in turn fed by Tesla megabatteries. A win-win for Musk's enterprise, although with one problem: the neighbors.

A toxic colossus. CNN told the story this week. In the summer of 2024, Musk transformed an old abandoned factory in southwest Memphis into what he himself proclaimed "the most powerful supercomputer on the planet." The project, promoted by his artificial intelligence company xAI, promised to turn the city into a new technological mecca (the so-called "Digital Delta") with quality jobs and tens of millions in taxes. However, for the residents of Boxtown, a mostly Black and impoverished community that has lived for decades with industrial pollution, xAI's arrival has meant an environmental déjà vu: a new source of pollution set up without clear permits and with an apparent contempt for public health.

A computer that consumes. To feed Colossus, xAI installed 35 gas turbines capable of generating up to 420 megawatts, releasing toxic emissions: nitrogen oxides, ultrafine particles and formaldehyde. The problem? It did so without the required air permits, relying on a legal exemption for temporary machinery which, according to experts, does not apply here. The area already houses 17 polluting facilities, and various studies indicate that the cancer risk in the area is four times the level the EPA considers acceptable. Memphis also has the highest rates of childhood asthma hospitalizations in all of Tennessee.

Realities. While the mayor of Memphis, Paul Young, celebrated the project's transformative potential and anticipated more technological investment, local leaders such as state representative Justin Pearson have denounced being excluded from the process. The lack of transparency adds to an obvious regulatory collapse: an installation with the power of an electric plant operating without permits in the middle of a residential neighborhood. To this we must add the most recent thermal images, which indicate that at least 33 of the turbines were operational in April. Following the controversy, xAI finally requested permits for 15 of them and withdrew 12, but, as CNN reported, the damage to trust is done.

Promises. The project's defenders say that "industry-leading emissions standards" will be met, but residents see the pattern repeat: promises of well-paid jobs that never materialize (the reality is that data centers employ very few people), while the environmental burden falls on those with the fewest resources to defend themselves. And Boxtown's story is not new: in 2021 its inhabitants managed to stop a pipeline that would have crossed their land, and in 2023 they shut down a sterilization plant that emitted ethylene oxide. For them, xAI is simply the latest chapter in a long struggle for the right... to breathe.

Innovation or regression. This is the latest angle of the controversy. The xAI installation reflects a broader national dilemma, one we have covered before, about the rise of artificial intelligence and its real cost.
Amid the enthusiasm for turning the United States into the "global capital of AI" (in line with the new EPA guidelines under the Donald Trump administration), the expansion of energy-devouring data centers advances without a serious evaluation of its environmental implications, especially in vulnerable communities. The executive's unconditional support for Musk, one of Trump's closest advisors, has coincided with the weakening of environmental policies, the elimination of environmental justice programs and a rhetoric that prioritizes economic efficiency over human health.

The first "stone." The contradiction seems clear: AI is promoted as the future, but it is fed with fossil technologies of the past, generating private benefits while the risks and damage are socialized. "If innovation chains you to fossil fuels, that's not progress," recalled Keshaun Pearson, director of Memphis Community Against Pollution. Residents fear that what is happening in Memphis is just a dress rehearsal for what could soon be replicated in similar neighborhoods across the country.

A tireless struggle. An NBC report recounted that in Boxtown, indignation coexists with fatigue. Many, such as Sarah Gladney (a respiratory patient living a few kilometers from the installation), feel they live in a perpetual battle. The possibility of a second xAI mega-installation, already planned in the city, only increases the sense of siege. "It seems that we are always at war," she underlined. A paradox: while local officials speak of economic transformation, neighbors simply speak of survival. In the background, the collision between the promises of cutting-edge technology and the old reality of systemic pollution raises an uncomfortable question: who pays the price of this digital revolution? In southwest Memphis, the answer seems sadly clear.

Image | Southwings for the Southern Environmental Law Center

In Xataka | Musk has created the perfect circle: Tesla's megabatteries feed the AI that will define its next cars

In Xataka | Some researchers have disassembled Tesla and BYD batteries. Now we know which one performs better and is much cheaper
