NASA’s alliance to finally understand dark matter

With its launch scheduled for September of this year at the earliest, NASA's Nancy Grace Roman Space Telescope, better known simply as the Roman Space Telescope, will become the space agency's newest space telescope. It will coexist with others like Hubble or the James Webb, but it has something those don't have: the ability to survey vast expanses of the Universe at once. That's what makes it special.

Much more space. The Roman Space Telescope has 18 detectors that give it a panoramic view of space. It was named in honor of Nancy Grace Roman, known as the "mother of Hubble" for her key role in the development of that other space telescope. The two, however, differ in a major way: the Roman can observe a field 100 times larger than Hubble's. As a result, it is expected to discover tens of thousands of planets, billions of galaxies and stars, and thousands of supernovae.

An ideal companion for James Webb. The Roman Space Telescope also has advantages over the James Webb. If its field of view exceeds Hubble's by 100 times, it exceeds the James Webb's by 50 times. This allows it to observe without researchers needing a specific target in mind: when exploring such large expanses, something unexpected can turn up at any time. That's where the James Webb comes into play. Although it can analyze less sky at once, it is much more precise: its mirrors are larger, so it captures more light and can discern more detail. If the Roman detects something interesting, the James Webb examines it under a magnifying glass.

Context matters. We have already seen that the James Webb can study the Roman's detections with more precision. But they can also help each other in the opposite direction, since the Roman can provide context around the James Webb's targets.

Together to unravel dark matter.
The biggest difference between the Roman Space Telescope and the James Webb compared to Hubble is that they analyze space by focusing on emissions in the infrared spectrum rather than visible light. As a result, they can see through cosmic dust, detect cold objects, and look further back in time. The latter is extremely useful for understanding the expansion of the universe and, along the way, unraveling some mysteries about dark matter.

The Universe expands. We have known for a long time that the universe is expanding. That is, the galaxies are moving away from each other, not because they are moving through space, but because the space between them is stretching, like a balloon being inflated. It is also known that this is happening faster and faster. Why? It is not clear, but dark matter is suspected to play a role.

Supernovae that act as lighthouses. To better understand what is happening, it is important to measure very precisely how galaxies are separating. One of the ideal ways to do this is by using Type Ia supernova explosions as beacons. They are phenomena with a known maximum brightness, so they can be used to measure distances by analyzing how bright they appear from Earth, or from wherever a space telescope sits. The problem is that they occur only about once every 500 years in the Milky Way. A telescope that measures in the infrared can look very far back in time, but the James Webb only does so over small patches of sky. The Roman, on the other hand, can survey areas so large that several of these explosions could be detected at the same time. That would allow several beacons to operate simultaneously, mapping the Universe better and helping explain why it is expanding the way it does. Once the beacons were located, the James Webb would enter the game to do its detailed analysis. Together they can unravel very ancient mysteries of astrophysics. Neither is better than the other.
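The standard-candle logic described above can be sketched in a few lines: a Type Ia supernova's peak absolute magnitude is roughly known (about -19.3), so measuring how bright it appears gives its distance via the distance modulus. This is a minimal illustration; the apparent magnitude used is a hypothetical example, not a figure from the article.

```python
# Standard-candle distance estimate for a Type Ia supernova (illustrative sketch).
M_PEAK = -19.3  # approximate peak absolute magnitude of a Type Ia supernova


def distance_parsecs(apparent_mag: float, absolute_mag: float = M_PEAK) -> float:
    """Distance from the distance modulus: m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)


# A supernova observed at apparent magnitude 22 (hypothetical value):
d_pc = distance_parsecs(22.0)
d_mly = d_pc * 3.26156 / 1e6  # parsecs -> millions of light-years
print(f"m = 22 -> {d_pc:.2e} pc, about {d_mly:,.0f} million light-years")
```

The fainter the supernova appears relative to its known peak brightness, the farther away it must be; that is the entire trick behind using them as beacons.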
Image | NASA. In Xataka | We have been studying the planets of TRAPPIST-1 for years with great hope. James Webb just knocked it down

In Spain we have glorified the long nap. Scientific studies have a different opinion on the matter.

The siesta is, for many, a fundamental pillar of the Mediterranean lifestyle and an essential afternoon pleasure that makes the rest of the day bearable. However, scientific evidence has put this habit under the microscope, especially when naps last several hours, long enough to dream several times over. And duration, frequency and, above all, age have a lot to say about the impact on health.

The time border. The current scientific consensus draws a fairly clear line between the classic power nap and the kind of nap where you put on your pajamas and get into bed for several hours. The barrier sits precisely at the half-hour mark: whoever passes it may begin to notice changes in their health. A recent study from the University of Murcia followed more than 3,000 adults in a Mediterranean environment to analyze the effect of naps. It found that napping for more than those 30 minutes was associated with a higher BMI, a higher incidence of obesity, and a greater likelihood of metabolic conditions such as diabetes or hypertension.

And there is more. When it comes to cardiovascular health, the heart can suffer. The European Society of Cardiology presented data in 2023 associating naps longer than 30 minutes with almost double the risk of developing atrial fibrillation. The American Heart Association published data supporting the same point, noting that naps lasting longer than an hour multiplied the rate of cardiovascular disease by 1.82.

The age factor. One of the most important studies on this was published in JAMA: after following 1,338 older adults for 19 years and objectively measuring their sleep, researchers were able to see the effect it had. Sleeping more during the day, napping more frequently, or concentrating the nap in the morning was associated with higher mortality from any cause.
Specifically, each extra hour of daytime sleep increased the risk of mortality by 13%.

There is much left to investigate. Among the studies currently available, no causal link has been established; that is, it does not follow that someone who takes a three-hour nap every day will necessarily develop problems. What the research does point out is that the need to sleep excessively during the day can be a consequence of poor nighttime rest, or a sign of a condition that is beginning to surface, such as sleep apnea.

You can take a nap. Although it may seem that we are demonizing the nap, the reality is that it has an important beneficial component when it lasts less than 30 minutes. At that length it improves cognitive performance, and it is also a way to recharge our energy a little for the rest of the day. But from there to actively planning a nap that can last for hours, there is a long way that should undoubtedly be avoided.

Images | Unsplash. In Xataka | Sleeping four hours a day and performing at your best is not a myth, it is a genetic rarity of 1% of the population

There is a company that remains committed to saving the manual gearbox no matter what the cost: BMW

The manual gearbox has spent years on a tightrope within the motor world. More and more brands are abandoning it, emissions regulations are stifling it, and suppliers are not exactly keen on manufacturing it in ever-smaller quantities. However, BMW's M division has not yet signed its death certificate.

What BMW said. Sylvia Neubauer, Vice President of Customer, Brand and Sales at BMW M, confirmed in an interview with the German outlet Automobilwoche that the division's engineers continue to actively work on a solution that allows the clutch pedal to survive in future models. Neubauer did not go into technical details, but according to the publication, the executive "promises a solution."

The technical problem. The obstacle is not so much power as torque. BMW M's inline six-cylinder engines generate torque figures that current manual gearboxes cannot absorb without mechanical compromise. A clear example: the BMW M2 CS arrived without a manual gearbox option precisely because the transmission could not manage the engine's torque. The same S58 that produces 553 HP in the 3.0 CSL has its torque limited to 550 Nm with the manual, while in other configurations it can deliver an extra 100 Nm. And developing a completely new, more robust manual transmission for use in only a handful of models is, according to the head of BMW M, Frank van Meel, "something that does not add up economically."

The possible solution: detuned engines. What the engineers appear to be exploring is artificially limiting torque output in engines paired with a manual transmission. It is not a new concept; it is already the case with the M2, whose automatic version has 50 Nm more torque than the manual variant. The question is whether buyers will be willing to accept that compromise in upcoming models.

What models are left with a manual.
After the Z4 M40i goes out of production this month, BMW M is left with only three cars equipped with a stick shift: the M2, M3 and M4. The current M3 is close to the end of its life cycle, with a replacement expected in 2028. What we do not know is whether its new generation will arrive with a manual gearshift; BMW Blog is not very clear on the matter. The M2 and M4, however, still have plenty of life left in them.

Why it is so difficult to save the manual. The pressure is constant and comes from several fronts. Emissions regulations in Europe keep tightening (by 2030, manufacturers must reduce fleet emissions by 55% compared to 2021), and automatics consume less fuel in the approved test cycle. Driving-assistance systems are designed almost exclusively to work with automatic transmissions. And the transmission suppliers themselves prefer to work with large volumes, not short runs of manuals for niche enthusiasts.

What this means. BMW M isn't closing the door, but it isn't opening it wide either. The brand is buying time (and trying not to disappoint its most purist customer base) while solving an engineering problem that is, above all, an economic one. If the solution is to detune the engines paired with a manual transmission, that could spark debate among those who expect maximum performance in every configuration. But for those who value the driving experience over the spec sheet, it may be enough.

In Xataka | China has been boasting about its driverless robotaxis for years. Until more than 100 of them stalled at once in Wuhan

Electricity and gas have become luxury items. Europe's plan is to intervene in prices, whatever the cost

Turning on the heating, running a washing machine or keeping a factory running has become, overnight, a luxury. Faced with the economic asphyxiation threatening citizens and companies, the European Union has crossed the Rubicon: the free energy market, as we knew it, cannot sustain this crisis, and Brussels is preparing a drastic intervention to lower the bill at any cost.

A global market on fire. The epicenter of this new financial earthquake is in the Middle East, as we have been reporting these days in Xataka. The price of oil in international markets continues to suffer shocks; as the firm Sparta Commodities points out to EUobserver, it is the "largest daily movement since 1988." Investors now assume that the blockade in the region will cause real cuts in the global supply of crude oil, leaving behind the idea of a simple logistical delay in shipping. Gas has not been left behind. As Bloomberg details, European natural gas futures (the Dutch benchmark) soared 30% in a single day, reaching €64/MWh. Europe emerges from the winter with its reserves depleted and now faces an all-out fight with Asia for the scarce Liquefied Natural Gas (LNG) shipments available for the summer.

The daily roller coaster of the bill. To understand why this crisis punishes the consumer so much, we must look at how the price of electricity is formed hour by hour. An analysis by the Financial Times shows how prices in Europe now suffer wild volatility. The example of last March 4 is devastating: at the height of the solar peak (2:00 p.m.), a megawatt hour in Denmark cost just 26 euros; just three hours later, after the sun set and the gas plants came into play, the price catapulted to 430 euros. This roller coaster, with jumps of up to 1,700% in one afternoon, has been replicated with the same harshness in the Netherlands, Germany and Belgium. Gas thus imposes a "law of luxury" every time the sun disappears, preventing industry from planning its production.
Intervening "whatever the cost." With heavy industry (steel, chemicals, aluminum) on the brink of the abyss (it is worth remembering that, according to a European Commission document cited by Euronews, industrial electricity in the EU was already twice as expensive as in the US and China before this crisis), Europe has decided to act. According to the documents discussed by European leaders, to which Euronews has had access, the emergency plan seeks quick relief by taking the scissors directly to the bill in three ways:

National tax cuts: these currently vary enormously and can amount to up to 22% of the electricity bill.
A cap on tolls and network charges: these represent 18% of the bill for large industrial consumers.
A review of carbon emission costs: these add 11% to the cost of electricity generation.

Intervention beyond tax cuts. The Prime Minister of Italy, Giorgia Meloni, has toughened her tone towards companies. In statements cited by Euronews, she warned: "We will do everything possible to stop speculation. I am ready to react, if necessary, including by increasing taxes on companies that speculate on prices through energy bills." Furthermore, the panic button for strategic reserves has been pressed. As Reuters explains, the finance ministers of the G7 and the EU are negotiating the release of part of the 1.4 billion barrels of strategic reserves that Europe keeps, to flood the market and artificially sink prices.

The cost of not intervening in time. Bloomberg details the case of Domo Chemicals, a plant in the German industrial city of Leuna that has had to declare insolvency, consumed by energy costs. This erosion of the industrial fabric also coincides with a delicate political moment in Germany, where the conservative party (CDU) of Chancellor Friedrich Merz has just suffered an electoral setback against the Greens in the regional elections in Baden-Wuerttemberg.

The Spanish shield.
Despite the urgency, the overall European response is fragmented. EUobserver points out that Ursula von der Leyen has proposed, as a patch, expanding the Caspian Sea oil and gas corridor. Ironically, the only real shield right now is Spain. As the same outlet highlights, the Spanish market has registered the lowest and most stable prices this week thanks to its enormous earlier investment in renewable energies, which partly isolates its system from fossil-fuel volatility. Finally, the markets have enjoyed a slight respite thanks to geopolitics. According to Bloomberg's latest update, European bonds rebounded and gas fell 17% on Tuesday after US President Donald Trump predicted the conflict with Iran would be resolved "very soon." However, investors assume that if the war drags on, prices will remain high for a long time.

Waking up to reality. With 67% of its consumption still tied to imported fossil fuels, the bloc is aware that depending on Middle Eastern trade routes is a huge risk for its economy. Until now, the European Union trusted that the free market would solve consumers' problems and guarantee the best prices. This energy crisis has shown that this is not always the case. The authorities now accept that, in extreme situations, intervening in bills, capping profits and emptying state reserves is the only viable solution. Whatever the cost, Europe has decided to take control to ensure that turning on the lights is not a privilege reserved for times of peace.

Image | freepik and Haydn on Unsplash. In Xataka | Neither oil nor gas: if a total war breaks out between the US and Iran, the definitive weapon will be desalination plants
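As a rough illustration of the three levers described earlier, the reported shares (up to 22% in national taxes, 18% in tolls and network charges, 11% in carbon costs) bound how much of a bill the plan could actually touch. A minimal sketch, where the €1,000 example bill is a hypothetical figure of ours, not a reported one:

```python
# Shares of the electricity bill as reported for the three measures in the plan.
SHARES = {
    "national taxes": 0.22,             # up to 22% of the bill
    "tolls and network charges": 0.18,  # 18% for large industrial consumers
    "carbon emission costs": 0.11,      # 11% of generation costs
}


def addressable_fraction(shares: dict = SHARES) -> float:
    """Maximum fraction of the bill the three measures combined could touch."""
    return sum(shares.values())


bill_eur = 1000.0  # hypothetical monthly bill
print(f"Addressable share: {addressable_fraction():.0%}")
print(f"On a €{bill_eur:.0f} bill, up to €{bill_eur * addressable_fraction():.0f} is in play")
```

Roughly half the bill sits inside the three components the plan names, which is why Brussels sees room for "quick relief" without touching wholesale prices directly.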

saying that opera and ballet don’t matter to anyone

A conversation about the future of cinema in theaters unleashed, almost by accident, one of the most unexpected cultural controversies of the final stretch of this awards season. Timothée Chalamet had the unfortunate idea of using opera and ballet as symbols of cultural irrelevance, institutions in the sector have responded, and Chalamet's chances of winning an Oscar that many took for granted have begun to be questioned.

I didn't want to dance. Chalamet did not set out to talk about opera. The conversation, held last March 4 with his 'Interstellar' co-star Matthew McConaughey, revolved around something broader: whether theatrical cinema has a future and whether actors should beg audiences to come see it. Chalamet argued that good films (he gave the Barbenheimer phenomenon as an example) don't need anyone to promote them. And to illustrate the alternative, he resorted to a rather unfortunate image: "I don't want to work in ballet or opera, which is like 'hey, keep this alive even if no one cares anymore.'" He added: "with all due respect to the people of ballet and opera." Too late.

Some answers. The institutions linked to opera and ballet were the first to respond: the Royal Ballet and Opera of London posted on Instagram on Friday a video of artists and technicians on the theater's stage. In the description they invited the actor to reconsider his position, without any hostility. The English National Opera was somewhat more combative: it posted a photo of Chalamet alongside his viral quote and offered him free tickets with the code "Timothée" so he could "fall in love with opera again." The Seattle Opera went in the same direction: a 14% discount on its production of 'Carmen' using the same code.
In a later interview, the Royal Ballet and Opera made it clear: ballet and opera have influenced contemporary theatre, film, fashion and music for centuries, and millions of people around the world continue to attend their performances. In other words, this is not a dying industry. The company also noted that it distributes its productions to more than 1,500 movie theaters in 50 countries, and its own executive director pointed out at the season presentation that three quarters of the institution's activity takes place outside the Royal Opera House.

The artists come out swinging. Figures like the opera singer Isabel Leonard have been less diplomatic, saying on social media that "only a weak person or artist feels the need to belittle the arts that precisely inspire those who seek slower and more contemplative experiences." The Colombian dancer Fernando Montaño published a formal letter on Instagram: comparing art forms, he wrote, limits growth and blocks the ability to develop one's talent. London-based dancer Anna Yliaho was more succinct: only an insecure artist, she said, tears down another discipline to elevate their own. The Irish baritone Seán Tester commented that confusing popularity with value is a fundamental error. From Spain, the conductor Alondra de la Parra, of the Orchestra and Choir Foundation of the Community of Madrid, echoed so many other institutions' invitations for Chalamet to come see them and change his mind. Many of these statements were collected in an article from The Hollywood Reporter.

The worst moment. The statements come at the worst possible time for Chalamet's campaign for the Best Actor Oscar for 'Marty Supreme', one of nine nominations for the film, including the top prize. Chalamet has had a notable awards career: at only thirty years old he became the youngest male actor to accumulate three nominations for best performance since Marlon Brando.
For months, in fact, he has been the favorite, winning at the Critics Choice Awards and the Golden Globes. But the back-and-forth of the months leading up to the ceremony seems to have taken its toll on the film: first an article about director Josh Safdie's behavior on a previous shoot; then the defeat at the BAFTAs (not a single award from 11 nominations, a record failure at that contest), followed by the defeat at the SAGs, where Michael B. Jordan won for 'Sinners' (becoming the new Oscar favorite). And now these statements, in line with Chalamet's aggressive promotional style, but liable to alienate the most traditional voters.

In Xataka | Cameron's 'Titanic' was going to be a flop. Until a trailer that broke several Hollywood rules changed the narrative

an “invisible” galaxy made up almost 100% of dark matter

The Universe continues to be a great unknown. Not only because of its vast immensity, but because human research, and the theories that explain its functioning, continue to require tweaks and reformulations. We have seen it when recalculating the distances between planets of the Solar System, the size of Jupiter, or the formation mechanism of planetary systems. Now, a research team from the University of Toronto has discovered the strongest candidate yet for a "dark galaxy," something that until now was only considered a possibility.

The candidate. The study presents the discovery of CDG-2, an acronym that literally means Candidate Dark Galaxy 2. It is an object 300 million light-years away, in the Perseus cluster, with a peculiarity: it is almost completely dominated by dark matter, with a minimal number of stars. Between 99.94% and 99.98% of its total mass would be dark matter, and it emits the light of "only" six million suns, compared to the tens of billions of the Milky Way.

Context. Galaxies are something like the Lego pieces that make up the universe, and they all contain dark matter. Dark matter is scientifically fascinating: it is invisible, it neither emits nor reflects light, but its gravitational influence was the scaffolding on which galaxies formed and is what holds them together today. In the Milky Way, estimates suggest that between 65% and 90% of the mass is dark matter, depending on the model, but astronomers have long wondered whether there were even more extreme galaxies. Until now, "dark galaxies" were just a theoretical prediction.

Why it is important. To begin with, because it empirically confirms what the theoretical models contemplated, but it has more implications: it opens a new way to detect galaxies from statistically significant groupings of globular clusters, which serve as a tracer.
As a case study: the most probable hypothesis is that neighboring galaxies ripped away the gas necessary for star formation, leaving only the skeleton. It also makes us look at its "twin" CDG-1 with different eyes: detected previously, it could be a case of an even more extreme dark galaxy, possibly a pure dark matter halo.

How they discovered it. The research team came across this galaxy in a striking way: by looking for its shadow, since it is practically undetectable. Its fingerprints were four globular clusters, small dense concentrations of stars, around the Perseus cluster. After analyzing their arrangement statistically and ruling out that the grouping was a matter of chance, the team pointed the three most powerful telescopes available, Hubble, Euclid and Subaru, at that region and created the image that evidenced its existence. It is, in the words of lead researcher Dayi Li, "the first galaxy detected solely through its population of globular clusters."

Image: NASA, ESA, Dayi Li (UToronto); Image Processing: Joseph DePasquale (STScI)

Pending subjects. CDG-2 is still a candidate and not a confirmation, which is why it keeps its name. To establish its dark matter mass with certainty, it would be necessary to measure the velocities of its stars or globular clusters, something technically very difficult with current technology given how faintly they shine. The next few years, with James Webb and new Euclid observations, will be crucial to refine the portrait of this object, and to track down whether more dark galaxies like it are hidden in the large clusters of the universe.

In Xataka | A new "solar system" has just been discovered.
There's just one problem: it shouldn't exist. In Xataka | We have been deceived by the distances of the Solar System: the closest neighbor to Neptune is Mercury. Cover | NASA
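For a sense of scale, the fractions reported for CDG-2 can be turned into a back-of-the-envelope total mass. This is a sketch under our own simplifying assumption, not the paper's method: it takes the stellar mass in solar masses to be comparable to the luminosity in solar luminosities (a mass-to-light ratio of about 1).

```python
# Rough sketch of what the reported numbers imply for CDG-2's total mass.
# Assumption (ours, not the article's): stellar mass in solar masses is of the
# same order as luminosity in solar luminosities (mass-to-light ratio ~1).
L_SUNS = 6e6  # reported luminosity: about six million suns


def implied_total_mass(dark_fraction: float, stellar_mass: float = L_SUNS) -> float:
    """If dark matter is `dark_fraction` of the total, stars make up the rest."""
    return stellar_mass / (1.0 - dark_fraction)


for f in (0.9994, 0.9998):
    print(f"dark fraction {f:.2%} -> total ~{implied_total_mass(f):.1e} solar masses")
```

Even under this crude assumption, a few million suns' worth of stars would sit inside a halo of tens of billions of solar masses, which is what makes the object so extreme.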

Productivity science says it’s not just inches that matter

It has happened to me and it may happen to you too: you have a monitor and you notice that it is no longer enough. You could take the leap and swap it for something a little larger, but just adding inches to the equation isn't going to change much. To change the experience, we need something different, like opting for an ultrawide monitor or adding one more monitor to our setup. Which is the best option for you? Both are great, but they may not suit your needs in the same way. So let's take a look at the advantages and disadvantages of these two configurations so you can choose according to your priorities.

Choosing an ultrawide monitor

An ultrawide monitor is larger than a conventional one, but we cannot stop there. These monitors usually have a 21:9 format, which means they are wider. That gives us more horizontal space, which is a wonder for productivity. And not only that: being a single screen, there is no barrier or frame cutting off the visual experience, something ideal for working with long lines of code or spreadsheets with countless columns, or for keeping three windows with documents or applications open at the same time. Your entire workspace, without interruptions. And for gaming they are the best, because you have a larger field of vision and the immersion they provide is not comparable to that of a normal monitor.

To this elongated screen we must add another factor: curvature. There are flat ultrawide monitors, although if you dare to take the leap, I would recommend opting for a curved one. The reason is very easy to understand: the slight curve of the monitor helps you see the whole screen at a glance. What does this imply? You don't have to turn your head, something you will appreciate by the end of your day. In addition, the ultrawide lets you work centered and with a straight spine. With two monitors, your "center" will be the bezels between them.
Therefore, more neck movement. Another element that works in favor of the ultrawide: Fitts' Law. This, in short, predicts that the time needed to move to a target depends on its distance and size. How does this apply to monitors? With two of them, the frames act as a "barrier" separating the screens, one that the brain reads as an interruption. That does not happen with the ultrawide, since the mouse and everything else moves fluidly across the screen. Without constantly jumping from one monitor to another, the cognitive load is reduced, and that means less fatigue.

It is not the main reason to choose one of these monitors, but I have friends who have opted for an ultrawide because they prefer a more minimalist and tidy space. In the end, it is a continuous visual experience you place on your desk, which, of course, also has its downside: you need a deep desk. I will leave for last two more cons that, without being a drama, I would weigh carefully before opting for this option. Since it is a single screen, if one day you start the computer and the monitor does not turn on, you are left with nothing (having two monitors clearly wins there). In addition, because it has many more pixels than a traditional widescreen monitor, you are going to need a reasonably powerful graphics card if you don't want your games to drop below 60 FPS.

Choosing two monitors

The other side of the coin: two monitors, side by side. If I had to define this setup in one word, it would be versatility. To build a setup with two screens, we can buy both at once or simply purchase one and add it to the one we already have, whether identical or of a different size and characteristics. And not only that: we can also adjust their heights as we wish, or rotate one of them to make it vertical. The latter is great for reading long documents or glancing at social networks while you keep another horizontal screen for a normal experience.
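Fitts' Law, mentioned above, can be made concrete with a small sketch. The constants and pixel distances here are hypothetical, chosen only to illustrate how crossing a dual-monitor span inflates the predicted pointing time:

```python
import math


# A minimal sketch of Fitts' Law: predicted movement time
# MT = a + b * log2(2D / W), where D is the distance to the target and W its
# width. The constants a and b (and all pixel figures) are hypothetical.
def movement_time_ms(distance_px: float, width_px: float,
                     a: float = 50.0, b: float = 150.0) -> float:
    """Predicted pointing time in milliseconds."""
    return a + b * math.log2(2 * distance_px / width_px)


near = movement_time_ms(400, 80)    # an 80 px button on the same screen
far = movement_time_ms(2600, 80)    # the same button across a dual-monitor span
print(f"near: {near:.0f} ms, far: {far:.0f} ms")
```

The logarithm means doubling the distance only adds a fixed increment, but the farther, frame-interrupted target is still measurably slower to reach, which is the ergonomic point being made.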
I have been working with two monitors for years and it is my choice because it offers the feeling of having two separate spaces. For example, I usually have a document open on one screen where I write, and email or Slack on the other. In return, there is one thing in which the ultrawides win by a landslide: with two monitors you are going to find a frame in the middle, and you are going to have to move your neck more.

Let me stop at this last point for a moment. It is essential that the two monitors are well positioned, something that is not as simple as it sounds. If they are identical it is easier, but it can be an odyssey if they are of different sizes or manufacturers. If possible, I would go for a monitor stand, although that adds to the bill. And it is better not to skimp there, since it will have to support the weight of the monitors at all times.

The good and the bad of both options, face to face

Ultrawide monitor
THE GOOD 🟢 You work without frames in between. It is ideal for editing video (an infinite timeline) or having three legible columns of text, and it helps you avoid straining your neck.
THE BAD 🔴 They are not for all desks: you need a robust stand, a deep desk and a good graphics card.
Ideal for: having all your documents or apps on the same screen to see them at a glance.

Two monitors
THE GOOD 🟢 They allow you to have two separate workspaces.
THE BAD 🔴 They involve more neck movement and there are black frames in the middle.
Ideal for: more versatility: you can put one vertically (ideal for …)

we have to get to the month of March no matter what

Russia has intensified a strategy of attrition that aims less to gain ground than to disrupt daily life, and it has done so by hitting the Ukrainian energy system to leave the country without electricity, heating or basic services at the cruelest time of the year. Faced with Moscow's missiles, Kyiv has called in a group of kamikaze hunters with a very clear plan.

Thermal terror. We covered it last week. With temperatures plummeting to -20ºC and a grid already weakened by months of attacks, waves of missiles and drones seek to collapse substations, electrical infrastructure and the nodes that sustain urban heating, and there is even fear of a more precise campaign against points that feed nuclear plants. The goal is simple: turn the cold into political pressure, erode civilian resistance and push Kyiv towards a negotiation under torment, just as the United States tries to open a diplomatic path. The result is a country forced to live in survival mode, with blackouts that last for days in some districts, thousands of buildings without heat in the capital, schools closed and citizens who, unable to leave, endure in dark and frozen homes, wrapped in blankets, with candles, camping stoves and a shared feeling that the front is no longer only in the trenches, but also in the living room.

Heat, water and normality at a minimum. In cities like Kyiv, the blow is especially dangerous because heating depends on centralized systems that distribute hot water from cogeneration plants, and when supply is cut off in the middle of the ice, the risk is not only of being cold, but of pipes freezing and bursting, causing flooding when the service returns. That is why the authorities have gone as far as recommending that circuits be drained in thousands of buildings, accepting temporary cold to avoid a major disaster, while repairs are made slow and difficult by the weather and repeated attacks.

Searching for fire.
Life is reorganized around “heat points”: public centers where people take shelter, charge mobile phones and receive hot food, plus extraordinary solutions such as trains adapted as mobile hubs to warm up and regain some autonomy. Even so, as Forbes noted, what is most striking is the obstinacy of normality: businesses running on generators, neighborhoods that hold out in the dark, families improvising routines and a society that, instead of becoming anesthetized, feels again in a tangible way what it means to sustain a country at war when the temperature turns each blackout into a physical threat. Air saturation. Russian pressure is not only more constant, it is also more massive, and its strength resides in volume: the number of attack drones has escalated to more than 5,000 a month, equivalent to more than 150 every night, a figure designed to deplete defenses and force Ukraine to choose what it saves and what it doesn’t. Although the interception rate stays high, the strategic cost is enormous, because shooting down swarms with surface-to-air missiles or aviation weapons consumes scarce and very expensive resources at an unsustainable pace. Zelensky himself has warned that there are systems running out of ammunition. Mobile teams with autocannons and machine guns provide a useful and relatively cheap defense, but their range is limited and they can only protect specific points, such as a power plant, leaving too many gaps for an enemy that strikes and repeats the pattern every night. In that equation, the “thermal terror” does not depend on destroying everything, but on landing enough hits that the system cannot recover and the population cannot rest. The kamikaze “hunters”. The Ukrainian response is coming via a route better suited to this new mass war: small, fast and cheap interceptor drones, conceived as disposable hunters capable of taking down Shaheds from a distance without burning a missile for each target. 
They are an evolution of the FPV ecosystem, but oriented toward pure performance, with “bullet”-type designs and an industrial logic that looks for volume: different models, several suppliers, accelerated production and a cost per unit that allows operators to take risks without mortgaging the arsenal. Their effectiveness is maximized by launching more than one per target, just as is done with expensive interceptors when the priority is to ensure the kill before the drone reaches a substation or thermal plant, which requires manufacturing many more interceptors than there are enemy drones. Aid. And yet, what seemed impossible a few months ago is beginning to sound viable: manufacturing has taken off and, with allied support, Ukraine is reaching a scale that is no longer symbolic but operational, to the point that interceptors are becoming protagonists of the nightly shoot-downs and claiming a growing share of the work that previously fell to missiles. Hold on until March. The strategic sense of these interceptors is not only to shoot down drones, but to open a window of time, because Ukraine will not be able to rebuild or stabilize its energy network as long as it keeps receiving daily blows on the same critical points. The winter war is decided, therefore, in the ability to reduce the number of hits that get through enough to repair without the repair being destroyed the next day, and in maintaining morale when the cold punishes as much as the enemy does. Russia bets on fatigue and despair, while Ukraine bets on a cheaper, mass-produced defense that allows it to resist the peak of winter demand and reach the milder spring season with the system alive. If the Russian plan is to push a country into a dark age of ice and blackouts, the Ukrainian response is to build, urgently and with wartime engineering, an aerial barrier made of kamikaze hunters that not only protects transformers, but buys something much more valuable: time not to break (or freeze). 
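The salvo logic described above (launching more than one interceptor per incoming drone) can be made concrete with a little probability. This is an illustrative sketch only: the per-interceptor kill probability and the required overall kill probability are assumed numbers, not figures from the article, chosen just to show how salvo size drives production requirements.

```python
import math

def salvo_size(p_kill: float, p_required: float) -> int:
    """Smallest number of interceptors to launch at one incoming drone
    so that P(at least one hit) >= p_required, assuming each attempt
    succeeds independently with probability p_kill."""
    # P(at least one hit with n shots) = 1 - (1 - p_kill)^n
    return math.ceil(math.log(1.0 - p_required) / math.log(1.0 - p_kill))

# Assumed, illustrative figures: 60%-effective interceptors and a
# 95% required kill probability per incoming drone.
n = salvo_size(p_kill=0.60, p_required=0.95)
print(n)        # -> 4 interceptors per drone under these assumptions

# At the article's ~150 drones per night, that salvo size implies the
# nightly interceptor output needed just to keep pace:
print(150 * n)  # -> 600 interceptors per night
```

Under these assumed numbers the defense needs roughly four interceptors built for every enemy drone launched, which is exactly why producing many more interceptors than there are incoming drones becomes the binding constraint.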
Image | Denys Shmyhal

In Xataka | Russia has dynamited electricity in Ukraine to activate “thermal terror”: that “warming” in winter is a lethal risk

In Xataka | Russia’s drones are dropping like flies and it’s because of Ukraine’s craziest weapons: a fishing …

We have spent 30 years forgetting how things are made. Now China holds the keys to matter and the West is panicking

For the past three decades, Western democracies have operated under an intellectual mirage. Elites, blinded by a neoclassical bias, assumed that control of intellectual property, financial instruments and software code constituted the pinnacle of value creation. In this worldview, physical processes (the “dirty work” of mining, refining and manufacturing) were considered low-margin commodity services that could be outsourced to low-cost jurisdictions without strategic risk. As Gillian Tett explains in her Financial Times column, this cognitive bias allowed China to dominate global supply chains with little protest. The material deterioration of the West. The essence of the current problem is defined by investor Craig Tindale in his essay “The return of matter”. In it he argues that the West has suffered a “strategic disarmament” by dismantling its national productive economy in favor of quarterly financial efficiency. As Tindale details, the West fell into the “raw material paradox”: believing that possessing the raw mineral is equivalent to possessing the usable material. While the West possesses vast geological deposits, China has monopolized the “midstream”, that is, the heavy industrial capacity to refine, smelt and purify these materials into useful forms. Without this capability, a lithium mine in Australia or a copper mine in Arizona is simply a quarry for a Chinese smelter; they are not strategic assets for the West if Beijing holds the keys to access them. The data is there. The figures on Chinese industrial dominance are, as Tindale describes, overwhelming and unprecedented in history, consolidating what he calls “processing sovereignty”:

Gallium: China controls approximately 98% of global production of a material that is essential for AESA radars, 5G networks and the semiconductors of the future. 
Rare earths: the Asian giant dominates 90% of chemical separation capacity (the true technical “separation wall”) and more than 90% of the production of NdFeB magnets, vital for electric vehicle motors and defense systems.

Graphite: it controls more than 90% of the production of graphite anodes, the indispensable component of virtually all lithium-ion batteries.

Magnesium and polysilicon: its control extends to 90-95% of magnesium smelting (key for aluminum alloys) and 95% of the polysilicon needed for solar energy.

As Tett points out, while the West became obsessed with software and services, China was quietly building the physical infrastructure that today gives it a massive competitive advantage in the race for artificial intelligence and the energy transition. This physical reality is what has forced the Trump administration to try to redraw the energy map by going after Venezuelan crude, desperately seeking to regain control over “matter”. The electric wall of AI. This physical reality has revealed that the race for artificial intelligence is not just a question of code or chips. The digital leadership of the West is now running into the physical limit of cheap energy. Satya Nadella, CEO of Microsoft, and Jensen Huang, CEO of Nvidia, agree that the biggest current problem is not an excess of chips but a lack of electricity to connect them. On this board, China has gone from being a dependent petrostate to becoming the first “electrostate” in the world. Beijing now produces 2.5 times more electricity than the US and builds 74% of all solar and wind projects currently under way on the planet. By investing massively in electrification, China is expanding an infrastructure that could give it a decisive advantage in the AI race. The Venezuelan trap. Against this backdrop, Donald Trump’s administration has accepted the importance of physical matter, but seems determined to fight with tools from the last century. 
The move on Venezuelan crude seeks to ensure that the reserves of Venezuela, Guyana and the United States are under US influence, which would represent close to 30% of the world’s oil reserves, according to a JPMorgan report. However, Venezuelan oil alone cannot solve the AI problem. As Gillian Tett warns, while Washington asks the world to buy 20th-century infrastructure (fossil fuels), Beijing offers 21st-century infrastructure (renewable energy and high-voltage networks). In addition, Venezuelan crude is “mortgaged”: the country owes up to $60 billion to China under the oil-for-loans model, and its infrastructure is in ruins. The skills gap and the clash of “clocks”. Rebuilding industrial sovereignty is not just a question of money. The West has been shutting down its heavy industrial capacity for thirty years, causing a “human bottleneck”: the metallurgists and process engineers who know how to tune an unstable furnace or a chemical separation train are retiring without replacements. Tindale further postulates a conflict of time horizons. The “Western financial clock”, which demands quarterly profits, has destabilized the “industrial clock” (which requires decades of investment) and the “war clock” (which requires immediate reserves). While China’s clocks are synchronized by the state, the West remains trapped in short-term financial efficiency. Toward a rematerialized sovereignty? The JPMorgan report suggests that the US has won the short-term battle for Venezuelan crude. But, as Gillian Tett concludes, it risks losing the global strategic war for the energy that will power AI. Tindale’s thesis is blunt: a civilization that financializes everything ends up sacrificing the material base that keeps it independent. 
If the West does not rebuild its foundries, refineries and factories, it will renounce the material sovereignty that sustains democracy, becoming a simple “quarry”, rich in resources but poor in capacity, in the face of a rival that already holds the keys to the physical world.

Image | Freepik

In Xataka | Venezuela has something much more valuable than oil and the US knows it. The big problem is that it doesn’t know where it is.

Almost no one wants a computer with AI, no matter how hard the industry tries

Dell is clear that its products in 2026 will no longer be “AI-first.” That absolute focus on promising the moon in the new generation of PCs thanks to the virtues of artificial intelligence is disappearing, and the reason is obvious: almost no one cares whether their PC has AI functions or not. What has happened. Kevin Terwilliger, chief product officer at Dell, said in a recent interview with PC Gamer that the AI fever on PCs has ended up causing a lot of disappointment among users. “In fact,” he explains, “I think the AI probably confuses them more than it helps them achieve a specific result.” Dell no longer believes (as much) in PCs with AI. This executive showed surprising honesty when talking about how this absolute commitment to AI has convinced neither users nor companies. The company has taken a step back, and although it will continue to pay attention to these AI options, they will no longer be the priority, because it has discovered that people don’t care much about them: “We’re very focused on leveraging the AI capabilities of a device – in fact, every product we announce has an NPU – but what we’ve learned over the course of this year, especially from a consumer perspective, is that they don’t buy based on AI.” Dress the monkey in silk, it’s still a monkey. Our dear PC knows it well: over the last two years it has tried to go from being a Personal Computer to a Personal Companion with the help, of course, of AI. All the manufacturers started to brag about the TOPS of their powerful NPUs and about how, instead of using our computer with a mouse and keyboard, we were going to use our voice. The promise has dissipated, and what has happened is that everyone keeps using the PC the same way they always did. At least for now. Dell lowers the bet. Dell was one of Microsoft’s initial partners in the launch of Copilot+ PCs in 2024, and even added variants of its popular Dell XPS 13 and Inspiron with Qualcomm’s Snapdragon X Elite chip. 
They even added that manufacturer’s Cloud AI chips to their high-end machines last year to try to reinforce the execution of local AI models, but that has not convinced users either. That manufacturers like Dell are changing the discourse is significant, and dangerous for Microsoft’s ambitious plans. Microsoft is left alone. The company led by Satya Nadella has been flooding us with new AI features in Windows for a long time, but the problem is that most of these features are being received with indifference… or with outright rejection. Windows Recall is the clearest example: the feature seemed promising, but its launch was mired in a major privacy controversy, its availability was delayed, and today it is an option that is barely talked about. Thank you for your sincerity, Dell. Dell’s speech is surprising and appreciated, especially after that continuous trickle of releases in which AI seemed to be the salvation of the PC and the key to a new golden age. These functions may end up being valuable, without a doubt, but what users continue to look for in their laptops, for example, is reliability and long battery life. That’s what still matters. The PC faces a complicated future. Jeff Clarke, COO of Dell, participated in a media meeting at CES 2026 and also mentioned how in this industry “we have this unfulfilled promise of AI and the expectation that AI will drive demand from end users.” It is clear that Dell now has a different vision, but both it and other manufacturers face a very difficult few months because, as Clarke said, “we are about to enter 2026 with a quite significant memory shortage”.

In Xataka | Sundar Pichai (CEO of Google) believes that ‘Her’ is inevitable: “there will be people who fall in love with an AI and we should prepare ourselves”
