Kubrick was obsessed with this masterpiece of war cinema

Stanley Kubrick, one of the most demanding directors in the history of cinema, was unable throughout his life to stop talking about one film, to the point that he "spoke enthusiastically about it until shortly before his death." It was not one of his own, but the work of an Italian filmmaker practically unknown to the general public, shot in 1966 with non-professional actors on the streets of Algeria: 'The Battle of Algiers'.

Someone who would know says so. Anthony Frewin worked as Kubrick's personal assistant between 1965 and 1999, with only brief interruptions during the 1970s. No one knew the director's cinematic tastes better than he did, as an interview makes clear: in it he states that Kubrick "was generally very disappointed with Hollywood cinema." What interested him was something else: international directors who questioned the conventions of the medium and sought new forms of expression.

His favorite. Among all those films, one occupied a place apart. According to Frewin, Kubrick was "excited" by it for decades: 'The Battle of Algiers', by Gillo Pontecorvo. When Frewin first started working for him, Kubrick already told him that it was impossible to understand what cinema could really do without having seen that film. And he kept saying it until shortly before he died, in 1999.

What 'The Battle of Algiers' is. Winner of the Golden Lion at the 1966 Venice Film Festival, it received three Oscar nominations. Gillo Pontecorvo shot it in black and white, on the real streets of the Casbah of Algiers, with thousands of local extras and a handful of non-professional actors. The result was so convincing that the film's advertisements warned that the images did not come from documentary archives.

The film reconstructs the most intense years of the Algerian conflict against French colonization, between 1954 and 1957. Pontecorvo based it on the memoirs of FLN commander Saadi Yacef, who also acted in the film, playing a character inspired by himself. The director spent an entire month doing tests before shooting a single scene, used multiple cameras to make the crowds appear larger, and even repeated some takes more than twenty times to exhaust the actors. The score, by Ennio Morricone, flirts with traditional North African percussion and military marches. And finally, the film stands out for its refusal to offer a clear moral perspective: both the FLN guerrillas and the French paratroopers commit atrocities, and no one plays the role of unequivocal hero.

What did Kubrick see in it? In an interview included in the aforementioned article, Kubrick commented that "all films are, in a sense, mockumentaries. You try to get as close to reality as you can, but it is not reality. There are people who do very intelligent things that have fascinated and completely deceived me. For example, 'The Battle of Algiers'. It is very impressive." Frewin added a detail: the director went so far as to say that 'The Battle of Algiers' and Andrzej Wajda's 'Danton' were the only two films he would have liked to have directed.

Parallels with his own cinema. Above all from a thematic point of view, the influence of 'The Battle of Algiers' on Kubrick's films is indisputable: 'Paths of Glory' examines the mechanics of military hierarchy and the corruption it generates, and 'Full Metal Jacket' splits its story into two almost incompatible points of view to show that war does not have only one face. In none of these films is there a protagonist who triumphs morally, and in that sense Pontecorvo and Kubrick shared the conviction that war films should not generate catharsis but discomfort.

At the Pentagon. The influence of 'The Battle of Algiers' goes beyond cinema. In August 2003, the Pentagon's Directorate of Special Operations organized a screening of the film for senior military and civilian officials. The invitation brochure read: "How to win a battle against terrorism and lose the war of ideas. (…) The French have a plan. It works tactically, but it fails strategically." The backdrop was the occupation of Iraq: the US army was looking for clues to understand why military victories did not translate into political stability.

They were not the only ones: the Black Panthers used the film as training material in the 1960s. The IRA also studied it. Argentine intelligence used it in the seventies, for radically different purposes. And today it is screened regularly at West Point, at the Naval War College and at the academy's Combating Terrorism Center. In the world of cinema, Nolan cited it as an influence when he released 'Dunkirk' (2017) and 'The Dark Knight Rises'.

In Xataka | '2001: Flashes in the Dark': An HBO Max immersion in Stanley Kubrick's masterpiece that surprises with its visual inventiveness

Paying more for a very fast NVMe SSD is a waste of money if you only store PDFs, but it is the only option if you are also going to work from it

Like me, you have probably at some point faced the purchase of a new storage drive, internal or external, for your desktop PC or laptop. Until a few years ago this was quite simple: either you chose a 5,400 rpm (revolutions per minute) HDD, or you chose a 7,200 rpm one. End of story.

But since SSDs came onto the scene, the purchasing (and usage) possibilities have changed a lot, and choosing one type or another is no longer so simple. Today, taking into account the price differences between HDDs (the "old" mechanical disks) and SSDs (the "modern" solid state drives), the choice is clear: SSDs win by a landslide, offering large capacities and much, much higher speeds. Admittedly, the current context of AI-driven price surcharges changes the picture a little, and whatever purchase we make now will involve a greater outlay. But this shouldn't last forever and, under normal conditions, SSDs remain the best value-for-money option for general use.

So you already have one thing clear: to expand capacity, in general terms, the ideal in 2026 is to go for an SSD. However, the choice is not that simple, because within the field of SSDs there are different technologies and different models, each with its own advantages and disadvantages. All of them, mind you, are valid for any use you plan to give them. But they do not all cost the same and, depending on what you need your new drive for, a smart purchase will tip the balance one way or the other. And your pocket, of course, will thank you for choosing carefully.

In other words, to give them first and last names: in a scenario where you need more space for your PC or laptop and have to go through the checkout to expand it with an SSD, you will have to choose between an NVMe SSD and a SATA SSD (the main types of SSD generally sold). The first is more expensive and faster; the second is cheaper and slower. And each one, in its proper context, shines in its own right. Below we will see how they differ and why each is a better purchase than its rival depending on the context, so you can pay more if the situation requires it or save as much as possible if you are not going to take advantage of its full potential.

SATA SSD: not as fast, but cheaper

When SSDs burst onto the scene, they did so in a format we know as SATA: drives of various sizes (noticeably more compact than mechanical HDDs) that are still commonly sold in 2.5-inch models. If your laptop or desktop PC is a good few years old, it probably contains one of these.

These SSDs were, at the time, night and day compared to mechanical HDDs. What used to take half an hour of waiting was suddenly completed in minutes. And without noise, too. The "problem" is that today, with much more modern and faster drives available (spoiler: NVMe), this type of SSD has been relegated more to pure storage than to daily work. That is to say: what we once stored on HDDs, we now store on these SSDs. A digital storage room that, in any case, is much faster and makes it easier (and quicker) to move large amounts of data and copy and paste files.

In addition, a SATA SSD is probably the only option for somewhat "old" laptops: today practically all models come with an M.2 connector (where NVMe drives are installed), but if you have a laptop that is a few years old (around 2018 or earlier) it probably won't have that connector, and a 2.5-inch SATA SSD is the one you will have to use. If you are coming from a mechanical HDD, the change will be spectacular.

Does this mean they are a bad choice? Not at all, they are still great in 2026… but especially for what we have been describing: storing. Because if what you need is a "hard drive" on which to install the operating system, applications and games, or on which to work intensively on tasks that require constant reading and writing of data (such as video editing), then you will be limited. Which leads us to the next model: the NVMe SSD.

NVMe SSDs: faster and more expensive

While SATA SSDs are somewhat larger and slower (but cheaper), NVMe SSDs are a rocket. The quickest and most direct way to describe them is: speed, speed, speed. If the former were a one-lane national road, the latter would be a motorway with eight lanes in each direction. This means that if only the occasional car (some file, such as a PDF) is going to travel these "roads", SATA is enough for you; if you need several heavy trucks moving at the same time (video editing, for example, with thousands of MB of data moving at full speed), then that national road will collapse and there is no choice but to take the motorway.

NVMe SSDs also stand out in design: they are compact, sleek and very small. They are the inseparable companion of any current desktop or laptop PC, but also of video game consoles, offering better performance in all types of tasks while taking up less space (something vital, for example, in the case of consoles). In fact, this is the type of SSD that the PlayStation 5, the Steam Deck and others use in the M.2 connectors they incorporate. A connector that, by the way, has been present on practically every desktop and laptop motherboard for a few years now.

This type of SSD is more expensive than its SATA relatives, but that extra financial outlay is worth it if, in addition to storing data, you plan to work from it.
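If you are not sure what your machine already has, the following minimal sketch (an illustrative example added here, not from the original article, and assuming a Linux system with the standard sysfs layout) tells NVMe drives, SATA SSDs and mechanical HDDs apart before you decide what to buy:

```python
# Hedged sketch: classify the block devices of a Linux machine by reading
# the standard sysfs layout. NVMe drives show up as nvme*, and the
# queue/rotational flag separates mechanical HDDs from SSDs.
from pathlib import Path

def classify_block_devices() -> None:
    for dev in sorted(Path("/sys/block").iterdir()):
        name = dev.name
        if name.startswith(("loop", "ram", "zram")):
            continue  # skip virtual devices
        try:
            rotational = (dev / "queue" / "rotational").read_text().strip() == "1"
        except OSError:
            continue  # device without the usual queue attributes
        if name.startswith("nvme"):
            kind = "NVMe SSD (PCIe, fastest)"
        elif rotational:
            kind = "Mechanical HDD (pure storage)"
        else:
            kind = "SATA/other SSD (fine for storage, slower than NVMe)"
        print(f"{name}: {kind}")

if __name__ == "__main__":
    classify_block_devices()
```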

Iran has destroyed one of the US's most valuable surveillance planes. The key was the intelligence that Russia gave it

In modern warfare, seeing before the enemy does can be more decisive than shooting first, which is why some current military systems are capable of monitoring areas the size of an entire country from the air. We are talking about devices whose cost can exceed 500 million dollars per unit. The problem is that even these key pieces depend on something far more fragile than it seems: information.

Without "eyes" in the war. In the last 48 hours, Iran has achieved something far more significant than destroying a plane: it has rendered useless one of the few key systems that allow the United States to see the battlefield from hundreds of kilometers away, the E-3 Sentry, a true airborne nerve center that coordinates fighters, detects threats and maintains air superiority. Its destruction is not symbolic (the US retains barely a fraction of the 16 it had operational), it is functional, because it removes real surveillance and command capacity at a critical moment, forcing the few remaining aircraft to take on more load and increasing blind spots in the theater of operations. In a conflict where every second of detection makes a difference, losing one of these assets is equivalent to fighting with your eyes partially bandaged.

500 million. The Telegraph reported that satellite images taken a few hours earlier showed the destroyed fuselage of the four-engine United States Air Force plane on the runway of the air base in Saudi Arabia. Among the twisted metal remains, what looked like a large flying saucer lay face down. It is, or was, the rotating radar dome that typically sits atop the E-3, the 500-million-dollar air operations nerve center that allows commanders to track everything in the air over hundreds of miles.

Images of the destroyed E-3

Invisible help. The attack, moreover, reveals not only precision but also high-level prior intelligence, and that is where a decisive factor comes in: Russia. According to various sources, including the president of Ukraine himself, Moscow provided satellite images of the base days before the attack, allowing Iran to know the exact location of the planes and choose the most vulnerable point, right where the E-3's radar sits. That support turned a conventional attack into a surgical operation, demonstrating that war is no longer decided only by who shoots, but by who sees first and best. The Russian-Iranian collaboration turns each strike into more than a tactical impact: it is a demonstration of network warfare against the American military architecture.

Aging fleet. The severity of the blow is multiplied because the United States barely has these systems and its fleet is aging. As we said, it had only 16 units in total, and not all of them were operational at any given time. The loss of a single aircraft might seem replaceable, but there is no active production line and replacement programs accumulate delays and political doubts. This leaves Washington in an awkward position, where each casualty is not just a material cost but a structural reduction of capabilities in the middle of a war, just when constant coverage of the airspace is most necessary.

The bombed base

Bases exposed to missiles and drones. The attack also exposes an increasingly obvious weakness: America's most valuable assets remain parked at bases poorly protected against long-range weapons. Although an attempt was made to disperse the planes to make them difficult to locate, the combination of satellite intelligence, drones and missiles has shown that this strategy is insufficient. Without hardened shelters and adequate protection, even key systems can be destroyed on the ground without the need for direct confrontation, confirming that technological superiority is of little use if critical assets are vulnerable before takeoff.

War of attrition. Meanwhile, Iran has adapted its strategy toward a sustained pace of smaller but constant attacks, seeking not so much to saturate defenses as to wear them down over time. With a still significant arsenal and the ability to coordinate complex strikes, Tehran maintains continuous pressure while forcing the United States and its allies to expend interceptors and critical resources. This attrition logic, combined with selective attacks on key nodes such as radars or command aircraft, multiplies the impact of each action and reinforces the central idea: it is not about launching more missiles, but about hitting where it hurts most.

Silent shift. Be that as it may, the episode points to a deeper transformation: modern war no longer revolves only around destroying forces, but around blinding systems. Iran has not just attacked infrastructure or troops, but the information layer that supports the entire US military operation, and it has done so relying on external intelligence. The result is one more clear signal for future conflicts: whoever manages to disable the adversary's sensors and command networks will have a decisive advantage, even against technologically superior powers.

Image | USF

In Xataka | Iran has achieved something unprecedented in the Middle East: that the US has to abandon its military bases

In Xataka | While the US bombs Iran, something unusual has happened: drones attacking the nuclear bases in North Dakota

Google has made AI consume up to six times less memory. Micron, Samsung and SK Hynix are paying dearly

We have spent months mired in the memory crisis, but maybe there is a way out. Last week Google Research published a study in which it revealed a technique called TurboQuant. It is a compression algorithm capable of shrinking the working memory of AI models by up to six times without appreciable loss of quality or performance. Great news for end users, who see a light at the end of the tunnel, but terrible news for manufacturers, whose golden age could come to an end.

Let's explain what the KV cache is. To understand TurboQuant you have to understand what that memory it manages to compress actually is. When a language model processes a long conversation, it needs to remember the context. Each token it processes is stored in the so-called KV cache, a kind of working memory that grows as we chat. The longer the conversation, the more memory the model requires.

Compressing, and fast. The KV cache is one of the main bottlenecks in the AI inference stage (that is, when we use the models), and one of the reasons data centers need so much RAM and HBM memory. TurboQuant uses a vector quantization method to compress this cache while maintaining the precision of the model.

Pied Piper. As soon as the Google study appeared, the analogies with the plot of the series 'Silicon Valley' began. In it, the fictional startup at the center of the plot developed an extraordinarily efficient compression algorithm called Pied Piper that threatened to revolutionize the technology industry. These days multiple references to the series have appeared on social media; it has already been called visionary for reflecting what is happening with spectacular accuracy, even though it was a comedy.

Six times less memory. The Google Research paper states that the method is capable of reducing the KV cache sixfold without an appreciable difference in performance in long conversations. The researchers will present their results at an event next month and explain the two methods that make it workable in practice. If they confirm what they have already teased, the implications are huge: less memory for inference means data centers can do the same thing with much less hardware.

Google's DeepSeek moment. The discovery has some analysts calling this Google's "DeepSeek moment." A year ago the Chinese startup DeepSeek launched an AI model that competed with the best but had cost much less to develop. That shook the industry, and now we return to a technical achievement that points in the same direction. In AI, doing the same with less is crucial, given the enormous resources this technology requires. Some have already run preliminary tests with TurboQuant and confirmed that the method does indeed work.

Micron, Samsung and SK Hynix pay dearly. The impact of this technique could be enormous, and it has already begun to show in the stock market valuations of DRAM and HBM memory manufacturers. Companies like Micron, Samsung, SK Hynix, SanDisk and Kioxia fell noticeably last week from their recent highs. One of them, for example, was around $471 on March 18 and now trades at $357, a staggering 24.2% drop. The same has happened with the rest of the manufacturers, which had already been falling since that date but have accelerated the decline since TurboQuant appeared.

But. The technique can, in theory, be applied only to the inference phase: the training of AI models is not affected by this compression, so huge amounts of memory will still be needed there.
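Going back to the KV cache explained above, a rough back-of-the-envelope sketch gives a feel for the orders of magnitude at play. It is purely illustrative: the model dimensions are assumed placeholders, and the sixfold reduction is simply the figure the paper claims, not an implementation of TurboQuant.

```python
# Back-of-the-envelope KV-cache sizing (illustrative sketch, not TurboQuant).
# The transformer dimensions below are assumptions; swap in real values
# for the model you actually care about.

def kv_cache_bytes(num_layers: int, num_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_value: int = 2) -> int:
    """Memory for keys AND values across all layers for one sequence."""
    # The leading 2 accounts for storing both K and V per token, per layer.
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_value

# Hypothetical 70B-class model dimensions (assumed, not from the article).
layers, kv_heads, head_dim = 80, 8, 128
for tokens in (8_000, 32_000, 128_000):
    baseline = kv_cache_bytes(layers, kv_heads, head_dim, tokens)  # FP16 cache
    compressed = baseline / 6  # the ~6x reduction the paper claims
    print(f"{tokens:>7} tokens: {baseline / 2**30:6.2f} GiB -> "
          f"{compressed / 2**30:6.2f} GiB with ~6x compression")
```

With those assumed dimensions, a 128,000-token conversation needs on the order of tens of GiB of cache per sequence, which is why a sixfold reduction matters so much to data centers.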
Besides, we will have to wait for AI companies to actually start applying the system, if it is confirmed to work; only then will we see its real impact. In theory this will give big tech a lot of room for maneuver and allow it to reduce token prices even further, but it remains to be seen whether they do so.

RAM prices drop. The impact of TurboQuant has also shown up in the prices of memory modules, which have dropped appreciably. For example, the Corsair Vengeance DDR5 32 GB 6000 MHz (2x16GB) kit was at 489.59 euros on Amazon until a few weeks ago according to CamelCamelCamel, but right now it is at 339.89 euros, a notable discount. It is true that not all components are falling equally, but there are indeed cases in which reductions seem to be happening.

In Xataka | The RAM crisis is destroying all of Valve's plans with its Steam Machine

Canfranc station closed in 1970 and spent decades abandoned. Today it is a luxury hotel

Not all hotels start from scratch. Some are born in buildings that already had a long history before becoming a tourist destination, and few cases in Spain are as clear as Canfranc. This old international station stands in the Aragonese Pyrenees, a large-scale railway project that ended up closing its doors in 1970. For decades its imposing silhouette remained abandoned, becoming one of the most recognizable images of the forgotten railway legacy. Today that same space has changed its function without entirely losing what it was.

To understand why Canfranc became what it was, you have to look beyond the building and focus on its function. The station was born as a key piece in the railway connection between Spain and France, at a time when this type of infrastructure set the pace of European transport. Its location was not accidental: it was designed to articulate the passage through the Pyrenees and facilitate the international exchange of travelers and goods. Everything about it responded to that logic, from its size to the complexity of its facilities, which placed it among the great railway complexes of its time.

From monumental station to five-star hotel

The history of Canfranc goes far beyond its role as railway infrastructure. Its position on the border made it an especially sensitive point in one of the most turbulent periods of the 20th century. During World War II the station was the scene of constant movements, some visible and others much less so, linked both to the transit of people trying to leave Europe and to operations related to the conflict. This context left a mark that is difficult to separate from the building itself, which went from being a symbol of international connection to a place crossed by tensions.

That stage ended definitively in 1970, when the station closed its doors and left behind a large-scale infrastructure with no clear purpose. From there began a long period of abandonment in which the building was exposed to deterioration, without activity and without a project to guarantee its conservation. For decades Canfranc went from being a transit point to an immobile presence in the landscape, as imposing as it was disconnected from everyday life. Even so, its size, its architecture and everything it represented kept it from falling into oblivion.

Canfranc's recovery was neither immediate nor easy. After decades of disuse, the building required a profound intervention to adapt it to a new purpose without erasing what made it recognizable. The transformation project opted to convert the old station into a hotel, but with a clear premise: preserving its character and its distinctive elements. The challenge was all the greater for a property declared an Asset of Cultural Interest in 2002, which required respecting its architecture and heritage value while incorporating the infrastructure needed to give it a second life in the 21st century.

That intention to preserve the building's identity carried over directly to the interior. The hotel's design seeks to evoke the 1920s through materials, colors and decorative details, while keeping constant references to the place's railway past. Elements such as wood, brass and rich fabrics coexist with an atmosphere that looks back to that era, while old transit spaces have been converted into areas of the hotel, such as the reception. Everything is designed so that history is not just on the walls, but part of the experience of whoever stays there.

Beyond its historical value, the hotel operates today as high-end accommodation with a fairly complete offering. It has 104 rooms, including four suites, designed to offer a comfortable stay in a very particular environment, surrounded by the landscape of the Aragonese Pyrenees. Added to this is a wellness area with a heated pool and gym, as well as other services typical of its category. It is no minor detail: Canfranc Estación is, according to Barceló, the only five-star Grand Luxury hotel in Aragon.

An important part of the current proposal involves what happens beyond the rooms. The hotel builds its offering around three restaurants, with a gastronomic commitment that combines Aragonese tradition and contemporary techniques and that includes a Michelin star and a Repsol Guide sun. All of this sits in a very specific mountain environment, that of the Aragonese Pyrenees, with easy access to ski resorts such as Candanchú and Astún, as well as various nature trails. This combination expands the experience and turns the stay into something more than just a night in a unique building.

Today Canfranc is not only visited; it is also inhabited, in a different way than originally conceived. What was once a space of rapid transit has become a place to stop, spend time and experience the surroundings from within. This new function does not erase its past, but incorporates it as part of the experience, allowing visitors to understand the place while they visit and use it. A good part of its uniqueness rests on that balance between what it was and what it is.

Images | Barceló Group | SGH | Jon Worth

In Xataka | A century ago Denmark built an island to defend its capital. Now it is full of tourists and is sold for ten million

Singapore is the hidden "heart" of the Internet and global telecommunications. It all started with a tree native to the region

We live in a connected and globalized world where (almost) everything is in the cloud and available through the internet. Although these connections seem invisible, they are not: submarine cables carry 97% of intercontinental traffic. If you take a look at the world submarine cable map, you will see areas that are true deserts and others that are tangles. One of the most congested points is precisely Singapore. That the enclave sits on the maritime route between Europe, the Middle East and East Asia partly explains why: geography is a historically compelling reason. However, the real trigger was a very curious Scottish doctor and a tree native to the Malay Peninsula.

The impressive Singapore node. That Singapore is Asia's great connectivity hub is a fact: it links East Asia, South Asia, the Persian Gulf, the Mediterranean and Europe. But it is not just a busy area; its interconnection density and operational resilience place it among the great exchanges that keep the world connected. Approximately 30 active cables, and many more in imminent deployment, converge on just 720 square kilometers of territory, according to TeleGeography. To prevent its seabed from becoming a tangle of cables, deployment is restricted to three specific areas and eight landing stations, awarded in strict order of arrival. On the Equinix campus sits the Singapore Internet Exchange (SGIX), a point where traffic is literally exchanged between hundreds of operators across Asia at a very short physical distance, which translates into ultra-low latency. In addition, its redundant capacity is such that when other critical routes fail it can absorb diverted traffic, as happened during the Red Sea crisis in 2022.

That tangle of cables is Singapore. Image: Submarinecablemap

Context: geography as state policy. Singapore's strategic location largely explains its status as a first-rate hub: it sits at the southern tip of the Malay Peninsula, where the Indian Ocean meets the South China Sea. In the Strait of Malacca, right where it becomes the Strait of Singapore, the narrowest point is only 2.8 kilometers wide and there are areas where the depth is around 25 meters. Some 80,000 ships pass through each year. Its position is key, but one milestone marked everything: in 1819 the British East India Company obtained the right to establish a trading post there. Since then the Strait of Malacca has been a usual suspect in international trade: much of the world's oil passes through it (even more than through Hormuz, currently in the spotlight because of the conflict between the United States, Israel and Iran). It is one of China's doors to the world, and also the area through which any cable connecting the West with East Asia must pass. Many ships, many cables and little space are a potential recipe for disaster, which its government manages conscientiously while continuing to vigorously promote favorable regulatory conditions to attract more cables.

The material that got submarine cables started. We have already flashed back to the 19th century with the British East India Company; let's return there. In 1822, when the Scottish surgeon William Montgomerie was in Singapore, precisely in the service of the East India Company, something caught his attention: the handles of the parang (a type of machete) were made of a material that looked like a kind of plastic wood. Unlike wood, this material did not splinter, resisted impacts, molded to the workers' hands and was immune to water. A marvel, in short. A material with properties he had never seen in his life, so he sent a sample to London for exhibition at the Society of Arts. There were no cables in Montgomerie's head; what he had in mind were surgical instruments. In 1845 the Society gave him an award and engineers began to work with the prodigious substance.

Illustration of Palaquium gutta. Franz Eugen Köhler, Köhler's Medizinal-Pflanzen (1883)

Plastic before the plastic boom. Gutta-percha is the dried sap of trees native to the Malay Archipelago such as Palaquium gutta, a natural latex that becomes rigid when cooled and is waterproof, resistant to salt water and electrically insulating. Bearing in mind that Bakelite did not arrive until 1907, in the 19th century it was the only material with that magnificent combination of properties, ideal for insulating an electric cable at the bottom of the sea. At the time there was no fiber optics, but there was the telegraph.

The rapid industrialization of gutta-percha. British engineering stepped on the accelerator and by 1851 the first gutta-percha submarine cable was already crossing the English Channel, led by the brothers Jacob and John Watkins Brett. The "nervous system" of the British Empire grew at dizzying speed: by 1866 it comprised 15,000 nautical miles and by 1900 it reached 200,000. Singapore was already on the cabling map thanks to London's connection to Hong Kong through India and the Strait of Malacca, laid by the British-Indian Submarine Telegraph Company. The stretch of coast where that cable landed in 1871 is where Meta's and Google's cables arrive today, a century and a half later, for identical geographical reasons.

The environmental drama. We have already seen that gutta-percha caused a real furor in the West, but obtaining it came with fine print: unlike rubber, it was not enough to tap the tree; it had to be felled, stripped of its bark and boiled. An adult tree yielded between one and seven kilos. The first attempt at a transatlantic cable, back in 1858, required an enormous amount: 300 tons for its 2,500 nautical miles (4,630 km) of length. Only two years after Montgomerie introduced gutta-percha to the old continent, Thomas Oxley estimated that the 412 tons exported to Europe had meant the felling of 69,000 trees. Palaquium gutta had disappeared from Singapore by 1857 and much …

The chemical composition of galaxies has always been full of unknowns. James Webb has taken a huge step toward solving them

The James Webb Space Telescope sees where others cannot: its infrared vision pierces clouds of cosmic dust and reaches galaxies so far away that their light has taken billions of years to reach us. Looking far into space is, in that sense, looking back in time. However, what James Webb has seen in these galaxies differs from what was expected: the early galaxies seem to have too much nitrogen, far more than the models predicted.

Among the exotic explanations on the table have been hypotheses such as gigantic stars never seen before, black holes acting as catalysts for galactic chemistry or enormous quantities of stars. In fact, that was the topic of conversation in the middle of a phone call while the Mexican astrophysicist José Eduardo Méndez-Delgado waited in line at the doctor's. On the other end of the line was his colleague Karla Arellano-Córdova, in Edinburgh. In that informal chat they decided to change the prism: perhaps the problem was not the galaxies, but how we measure them.

The discovery. The international team's proposal is to analyze three light signals from the same oxygen ion to calculate temperature and density at the same time, without starting from one to calculate the other (the original source of error). The result: the gas in those galaxies was a hundred to a thousand times denser than had been assumed. With that correction, the galaxies turned out to be richer in metals than they appeared, and the excess nitrogen shrank drastically.

Why it is important. First, because the metallicity of a galaxy is directly related to its history: the more metals in its composition, the more stars have been born and died within it. Until now we were underestimating this figure, which made those early galaxies look very different from our own and suggested an abrupt, discontinuous evolution. Now they look more like what we know. But the elements essential for life, such as carbon, oxygen or nitrogen, did not exist when the universe was born: they were manufactured inside stars and dispersed when those stars died. Hence the interest in knowing the chemistry of galaxies: it helps us understand when the universe had the ingredients needed for life. With the wrong measurements, we cannot know whether those ingredients were there earlier, and in more places, than we thought.

Context. The standard method to determine the composition of a distant galaxy is to analyze the spectral lines of its light based on the density and temperature of the gas. The problem is that in these primitive galaxies the gas is much denser than expected, so the usual lines work poorly as a thermometer, and from there on everything fails. The nitrogen anomalies appeared in the first scientific data from the James Webb Space Telescope, like this one or this one. Since the results did not fit the models, the scientific community threw itself into finding explanations. This paper proposes taking a step back: before invoking stellar physics, check that the measurements are correct. Besides, Webb now makes that possible: it simultaneously detects oxygen lines in the ultraviolet and in the optical in such distant galaxies.

How they do it. In essence, the trick is choosing the right signals. One of the oxygen lines, visible in the ultraviolet, has a special property: it is not distorted even if the gas is very dense, something that did happen with the lines used previously. By combining it with two other signals from the same ion, the team can calculate temperature and density at the same time, as if solving two simultaneous, independent equations. Using statistical simulations, the team found that the results were consistent with other independent measurements of the same galaxies.

Yes, but. As the team explains in the paper, their method corrects the density error, but not other equally important sources of error: the gas of these galaxies also has internal temperature variations, and that can bias the results in ways this study does not resolve. Furthermore, the method only works well when all three oxygen signals are clearly detected. In three of the six galaxies analyzed this was not possible, and the results are less precise. Nitrogen remains a problem: the overabundances come almost entirely from a particular ion whose emission is extraordinarily sensitive to temperature; a variation of just ten percent in that parameter would cut the calculated nitrogen in half. No one has yet measured that temperature directly. Even so, the work points out a path to follow before reaching for "exotic" explanations: verify that the measurement tools are up to par.

In Xataka | For a time it was one of the asteroids most watched by astronomers: the Webb has just resolved a key doubt

In Xataka | James Webb has been detecting red dots in the universe for years: the only problem is that we don't know what they are

Cover | Oleg Moroz
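The "two simultaneous equations" idea from the "How they do it" section above can be made concrete with a minimal numerical sketch. Everything in it is illustrative: the two ratio functions are invented placeholders standing in for the real oxygen-line emissivities, and the observed values are hypothetical, so it only shows the structure of solving for temperature and density at once.

```python
# Illustrative sketch only: solve for electron temperature and density from
# two emission-line ratios simultaneously. The ratio functions are toy
# placeholders, NOT the real [O III] emissivity physics used in the paper.
import numpy as np
from scipy.optimize import fsolve

def ratio_temp_sensitive(T, n):
    # Toy model: grows with temperature, barely depends on density.
    return np.exp(-3.0e4 / T) * (1.0 + 1.0e-6 * n)

def ratio_density_sensitive(T, n):
    # Toy model: mostly tracks density, weakly depends on temperature.
    return (n / (n + 1.0e4)) * (T / 1.0e4) ** 0.1

observed = (0.05, 0.45)  # hypothetical measured line ratios

def equations(params):
    T, n = params
    return (ratio_temp_sensitive(T, n) - observed[0],
            ratio_density_sensitive(T, n) - observed[1])

T_sol, n_sol = fsolve(equations, x0=(12_000.0, 5_000.0))
print(f"T ≈ {T_sol:,.0f} K, n_e ≈ {n_sol:,.0f} cm^-3")
```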

Predicting a drought six months in advance seemed utopian. The UPV has achieved it with a system that uses AI

In recent years drought episodes have intensified in some regions, and the fear of a global drought hangs over the conversation. Against this backdrop, a team of researchers from the Polytechnic University of Valencia (UPV) has created a system that can predict whether there will be a drought six months in advance.

The system. The work has been carried out by the team at the Institute of Water and Environmental Engineering (IIAMA) of the UPV and has been published in the journal Earth Systems and Environment. The method integrates predictions from four reference climate systems (ECMWF-SEAS5, Météo-France System8, DWD-GCF2.1 and CMCC-SPSv3.5), which are processed using artificial intelligence techniques. From this data the team calculates two of the most important international drought indices (the Standardized Precipitation Index and the Standardized Precipitation-Evapotranspiration Index), using data windows of 6, 12, 18 and 24 months. The method has been applied to the Júcar River basin, which regularly goes through recurrent and quite intense droughts.

Why it is important. The novelty of this system is that it is not limited to a single climate model or index: it merges three pieces that are usually used separately and adds AI processing to correct biases and adapt the models to a regional scale. This makes the prediction more reliable, since it does not depend on a single model. Furthermore, all of this has been integrated into an operational web tool intended for use in water management, not just as an academic exercise.

Results. The system is right 90% of the time when the prediction is made for the current month. For predictions three months ahead, reliability drops to 60%; for longer horizons (12, 18 and 24 months) the team does not give a percentage, but maintains that the model is still useful for anticipating what will happen up to six months in advance. Héctor Macián, co-author of the study, states that "the results confirm that the system is especially effective in reinforcing early warning of droughts, a fundamental aspect to anticipate management measures, reduce socioeconomic impacts and increase resilience to climate change."

Action window. As we said, the methodology has been developed in the Júcar river basin, a semi-arid area with long, dry and very hot summers, although the researchers stress that it is transferable to other drought-prone areas. Being able to foresee these episodes with up to six months of margin opens a window to set drought management plans in motion much earlier and thus mitigate their effects.

Image | UPV

In Xataka | The remains of an ancient Mayan city leave us lessons for the future: an amazing system against drought
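As a rough illustration of what one of those indices involves, here is a minimal sketch of a simplified Standardized Precipitation Index calculation. It is not the IIAMA team's pipeline: it assumes a plain gamma fit on synthetic data and skips refinements such as handling months with zero precipitation.

```python
# Simplified SPI sketch (illustrative, not the UPV/IIAMA system).
# SPI: accumulate precipitation over an n-month window, fit a gamma
# distribution to the historical accumulations, then map each value's
# cumulative probability onto a standard normal quantile.
import numpy as np
from scipy import stats

def spi(monthly_precip_mm: np.ndarray, window: int = 6) -> np.ndarray:
    # Rolling n-month accumulated precipitation.
    accumulated = np.convolve(monthly_precip_mm, np.ones(window), mode="valid")

    # Fit a gamma distribution to the accumulations (location fixed at 0).
    shape, _, scale = stats.gamma.fit(accumulated, floc=0)

    # Probability of each accumulation, then standard normal quantile.
    prob = stats.gamma.cdf(accumulated, shape, loc=0, scale=scale)
    return stats.norm.ppf(prob)  # SPI below -1 usually flags moderate drought

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_precip = rng.gamma(shape=2.0, scale=25.0, size=240)  # 20 years, mm/month
    index = spi(fake_precip, window=6)
    print("Months in moderate-or-worse drought:", int((index < -1).sum()))
```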

China has skyrocketed its arms production and is about to say goodbye to imports

Although the war officially grabbing all the headlines these days is the conflict between the United States, Israel and Iran, the reality is that global geopolitics is such a hornet's nest that the whole world is rearming. And while Europe discovers that it lacks essentials such as ammunition, or the qualified personnel to manufacture it, China reaches this critical moment in an almost unbeatable position: the army of its great rival depends more and more on the Asian giant, and China itself is just a breath away from self-sufficiency.

The report "Trends in International Arms Transfers, 2025", published a few days ago by the Stockholm International Peace Research Institute (SIPRI), collects the trends, changes and main actors in the global trade in heavy weapons between the periods 2016-20 and 2021-25, and it makes one thing clear: when it comes to weapons, China cooks its own meal and eats it too.

China's change. While the global volume of arms transfers grew by 9.2% in the 2021-25 period, China has remained the fifth largest exporter in the world (with 5.6% of the global share). But its way of interacting with the market has changed radically: it now sells more and buys much less. Ten years ago China was the fifth largest arms importer in the world; today it barely appears in 21st place, having dropped out of the top 10 for the first time since 1991. That does not mean it has disarmed, by any means. In fact, it is producing fighters as if there were no tomorrow and has surpassed the United States in the production of nuclear submarines. It simply no longer has to buy abroad what it can make at home.

This is how global arms imports are distributed: the 10 largest importers and the rest. China is in that rest. SIPRI

Why it is important. Because China is the world's second military power in spending (according to the International Institute for Strategic Studies), and the fact that a country of its size and investment stops depending on the foreign market is further confirmation of the maturity of its industry. It also shrinks its Achilles heel: if it does not depend on anyone for weapons, no one can pressure it by threatening to cut off its supply. Without going any further, one of China's first moves in the tariff tug of war was to tighten its control framework for rare earths, which are essential for weapons. And China's influence is not measured only within its borders, but by who depends on it: we have already seen how essential it is in the United States' supply chain, and the SIPRI report highlights how it stands as the pillar of Pakistan's defense, is the largest supplier of weapons to Sub-Saharan Africa and is opening new markets in Europe (Serbia).

Global context. The SIPRI document places this change in a context of global rearmament, especially in Europe (where imports are up 210%), and of direct competition with the United States. According to the report, US arms export policy toward Asia and Oceania is partly determined by its objective of containing China's influence, with key recipients such as Japan, Australia and South Korea.

From 'Made in Russia' to 'Made in China'. China has cut its imports by 72% between 2016 and 2025. Historically the Asian giant was dependent on Russian technology, but not anymore. That said, Russia remains its main supplier, accounting for 66% of what it still imports. After the end of the Cold War, Beijing continued to depend on Moscow and its technology, but the 1990s brought key moments for this turning point in Chinese strategy, such as the trauma of the Yinhe incident in the Strait of Malacca or the 1996 Taiwan Strait crisis, which made American military superiority, and the need to build its own defense industry, plain to see.

China is rearming. Beijing already has the largest navy in the world by number of ships, according to the US Department of Defense, and has established itself as the reference in the deployment of hypersonic missiles. At the strategic level, the Pentagon projects that China will have more than 1,000 nuclear warheads by 2030. If we look at its most recent budget, which grew by 7.2%, technological self-sufficiency and scientific innovation in defense appear as the absolute priority in order to break any external dependence.

What it means for the rest of the world. For Russia it obviously means losing its largest and most loyal historical client. According to SIPRI data, the fall in Chinese purchases has dragged Russian exports to historic lows, aggravating the crisis in its defense industry. For the United States it is a poisoned chalice: while Washington tries to reinforce its allies in the Pacific, it faces a rival setting a pace of industrial and technological production that it struggles to match. For figures like Pete Hegseth, China is no longer just a competitor, it is the pacing threat: the threat that sets the pace and scale to which the rest of the world must try to adapt. Countries geographically close to China are also accelerating their purchases, driven both by US reinforcement plans and by their own fear. The question is how long they will be able to sustain that effort, because in terms of industrial mass and speed no one today seems capable of keeping up with China.

In Xataka | The US has a problem in its arms race: China has "infiltrated" its army's supply chain

In Xataka | The US has a very serious problem with its F-35s: China is producing fighter jets beyond its capabilities

Cover | CCTV, SteKrueBe

We haven’t colonized Mars yet and we already know how to build bricks to live there: with urine and bacteria

Humanity is dead set on reaching Mars and eventually planting a colony there. Missions like NASA's Curiosity rover have been scanning its surface for years for signs of past habitability (with promising findings that leave big unknowns), and the Artemis II program is the technological springboard toward the first crewed mission to Mars. Sooner or later the day will come when humanity sets foot on Mars and the conditions to inhabit it are met (or manufactured). The next question will then be: how do we build a house there? It is not so much a question of design as of survival. A research team is already working on it and believes it has the solution, which it has published in the journal Frontiers in Microbiology.

The concept. The work, from the Politecnico di Milano, the University of Central Florida and Jiangsu University, consists of using two bacteria that work together: one capable of surviving in extreme conditions and producing oxygen, and another that turns human urine into stone. This promising duo could manufacture bricks directly from the Martian soil, without the need for kilns, factories or materials brought from Earth.

Why it is important. Because from an engineering point of view, moving materials and machinery over distances as long as the trip to Mars makes costs skyrocket and becomes technically unfeasible, and building with the materials available on Mars is not (yet) an option. This concept solves those two problems and a few others, such as energy consumption: according to the paper, biocementation consumes up to 7 times less energy than melting soil with microwaves and almost 50 times less than thermal sintering. Finally, because it is convenient: it converts human metabolic waste into construction material, solving the logistical problem of what to do with that waste.

Context. The various space agencies have arrival on Mars in the 2030-2040 window on their roadmaps. Biocementation (microbiologically induced calcium carbonate precipitation) has been studied for two decades for uses such as stabilizing soils, halting desertification or building with less carbon dioxide. This research transfers that knowledge to space, and it has applications on Earth in the form of more sustainable construction, soil repair or self-healing concrete.

How they did it. This point is essential, because the research team has not built anything, either on Mars or in the laboratory with real regolith. This is a perspective paper: it reviews existing knowledge about the technique and lays out a concept, analyzing the Martian regolith from robotic mission data. From there, after identifying its deficiency in calcium oxide compared with terrestrial cement, they studied which biological routes could compensate for it. That is where their proposal comes from, with the combination of Chroococcidiopsis + Sporosarcina pasteurii as the most promising, accompanied by a conceptual design for a bioreactor and a 3D-printing nozzle integrated with autonomous robotics.

Yes, but. The previous point makes the first handicap clear: this combination of bacteria has never been tested, neither on Mars nor in the laboratory. And on Mars the scenario is tricky: the reduced gravity weakens the microstructure of the resulting material (at least of conventional cement) and the perchlorates in the Martian soil are toxic to organisms. As if that were not enough, the temperature range in which the bacteria can operate is narrow, the water required may not be suitable, and there is no long-term stability data for these cultures. In terms of technological maturity, the project is at a very early stage: a concept on paper, with funding and a long road ahead.

In Xataka | China has found a "vital" element to colonize Mars: it survives in conditions lethal to other forms of life

In Xataka | We have a serious problem in our plans to colonize Mars: the astronauts' blood is mutating

Cover | Rain Morales and Planet Volumes
