The CEO of logistics gives way to the CEO of engineering

Tim Cook has announced that he will step down as Apple CEO on September 1. He will be replaced by John Ternus, the company's senior vice president of hardware engineering. This long-awaited generational handover represents an important change in the DNA of the leadership of one of the most valuable companies in the world.

Why it matters. Cook was a genius of logistics, supply chains and business diplomacy. Ternus is very different: we are talking about a mechanical engineer who has spent 25 years (half of his life) designing, testing and manufacturing Apple products. Apple goes from a leader who optimized how products are made and sold to one who decides how they are conceived and built.

The sign that anticipated everything. In January 2026 we learned that Cook had put Ternus in charge of Apple's design teams. The move was not officially announced, but Mark Gurman made it public at Bloomberg. It was the definitive signal: Cook's succession had been on the agenda for some time, and Ternus was the number one favorite. Until then, design at Apple had functioned as an independent fiefdom, a direct inheritance from the Jony Ive era. Making it report to hardware engineering meant that in Ternus's Apple, technical execution rules over aesthetics. It's not that design stops mattering; it is simply no longer the king it once was.

What Ternus has achieved and what he hasn't. His footprint is on practically all of Apple's current hardware catalog:

Apple Silicon on the Mac. The transition from Intel to its own chips has probably been Apple's most important technical decision of the last decade. In chip architecture, the main merit belongs to Johny Srouji, who now takes over hardware from Ternus. In product execution (a MacBook Air without a fan, sustained performance, record battery life, coherent integration with the SoC…), the credit goes to Ternus. We are possibly in the best Mac cycle in history.

iPhone. Not everything in the iPhone is his, but the build quality, thermal management, choice of materials and internal integration are.

iPad, AirPods, Apple Watch. He has participated in the launch of several new generations and product lines. What is not his fault is the stagnation of the iPad as a platform: that is a software and strategy problem, not a hardware one (the hardware is excellent), so we have to ask Craig Federighi and Tim Cook about it.

Between the lines. The best comparison we can make here is not so much between Cook and Ternus but between Cook and… Steve Ballmer. Ballmer was a sales and operations CEO who multiplied Microsoft's revenue but missed the mobile revolution. Cook has been an operations and services CEO who has multiplied Apple's revenue, but whose tenure has not produced a game-changing new product on the level of the iPhone or iPod. The Apple Watch took several generations to find its place; AirPods are a resounding success almost ten years later, but conceptually they are not a new category; the Vision Pro is in a limbo from which we will see how it emerges. Ternus arrives with a profile closer to the product. And that, in a product company, matters.

Besides, Apple has appointed Johny Srouji as Chief Hardware Officer, a new position that unifies hardware engineering and hardware technologies under his command. It is important for two reasons:

Srouji was about to leave. Months ago it was learned that he had informed Cook that he was seriously considering leaving the company. Apple has retained him with more power and responsibility.

It confirms that Apple Silicon is the central strategic bet.
Ternus's first big decision as incoming CEO has been to shield his most valuable piece.

Yes, but. Ternus inherits a company with pending tasks that cannot be resolved with good hardware alone:

AI. Apple Intelligence has arrived with a notable delay (in several senses) with respect to Google, Microsoft and OpenAI. AI is fundamentally software, models and services; Ternus comes from the hardware side.

Regulation. The App Store is more scrutinized than ever, and not only in the EU. Commissions, alternative payments and third-party stores are going to define a good part of the coming years.

Tariffs and supply chain. The manufacturing structure in China that Cook built and optimized over many years is now threatened by the Trump administration's trade policy.

The need to surprise. Apple hasn't launched anything in a while that evokes the 'wow' effect so common in the Jobs era.

And now what. Cook, as has happened several times with the old guard, is not leaving completely. He will be executive chairman, focused on the relationship with governments and regulators: the same diplomacy that he has managed with reasonable success for 15 years remains in his hands. Apple does not lose Cook; it relocates him where he can provide the most value now. Ternus is 51 years old; Cook was 50 when he took office. If Apple maintains its pattern of long tenures, Ternus may be at the helm for a decade or more.

Apple's bet is that its difference compared to Google, Microsoft and OpenAI will not be the most powerful AI model, but how it integrates AI into hardware that people touch, carry and use every day. That is where Ternus has an advantage no one else has. If that bet is correct, Apple has chosen the perfect CEO. If the AI battle is won in the cloud and in models, it may have a problem.

In Xataka | The foldable iPhone is getting closer every day: this is everything we know about it so far

Featured image | Xataka

How to measure the distance between two points in Google Maps on PC and mobile

Let us explain how to measure distances in Google Maps, so that you can get better references for how far away the points that interest you are. This is not about measuring distances along roads or paths (for that you can create routes in Google Maps), but about drawing a line between two points and knowing the physical distance between them. You can use it to measure streets, roads, or anything else you want on the map: a virtual ruler will be generated telling you the distance. We are going to show you how to do it both on the Google Maps website and in the app.

Measure distances on the Google Maps website. If you are using the Google Maps website, right-click on one of the points of the measurement you want to make. This opens a context menu, where you have to click on the 'Measure distance' option that appears at the bottom. Now all you have to do is click on another point on the map. In short: you right-click on the starting point, choose the option, and then click on the end point. This will generate a ruler showing you the distance between those two points. You can continue marking new points, which will be joined to the previous ones, and you will see the distance between each of them. In addition, at the bottom there is a total distance indicator showing the sum of all the segments.

Measure distances in the Google Maps app. Distance measurement is different in the Google Maps app, since it only shows you the total distance and not point by point, but the way to do it is quite similar. The first thing to do is tap with your finger on the place on the map that you want to be the starting point. This opens a menu with many options, in which you must tap the 'Measure distance' option at the bottom. Now a crosshair will appear in the center of the screen, and you drag the map with your fingers until it points to where you want to add the next point. When it does, tap the 'Add point' button, and everything stays the same so you can keep adding points. At the end, an indicator at the bottom left shows the total distance that all the points add up to.

In Xataka Basics | Google Maps: 45 functions and tricks to get the most out of both the website and the mobile app
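As a reference for what that on-screen ruler is actually computing, here is a minimal sketch of the straight-line ("as the crow flies") math: the haversine great-circle distance between two coordinates, with segments summed the way the total indicator does. This illustrates the underlying formula, not Google's actual implementation, and the example coordinates are approximate, illustrative values.

```python
# A minimal sketch of the math behind the Maps "ruler": great-circle
# distance via the haversine formula, plus a per-segment sum like the
# total indicator at the bottom of the map.
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance in kilometers between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def ruler_total_km(points: list[tuple[float, float]]) -> float:
    """Sum of segment distances, like the app's total distance indicator."""
    return sum(haversine_km(*p, *q) for p, q in zip(points, points[1:]))

# Example: Madrid -> Barcelona (approximate city-center coordinates)
print(round(haversine_km(40.4168, -3.7038, 41.3874, 2.1686), 1), "km")
```

Running this gives roughly 505 km for Madrid to Barcelona in a straight line, noticeably less than the driving route, which is exactly the difference between this ruler and the routes feature.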

Amazon is clear about its strategy for the AI ​​war: if you can’t beat your enemy, invest in them

Just two months ago Amazon announced an astronomical investment of $50 billion in OpenAI. Today it made a very similar move by announcing that it will invest $5 billion in Anthropic, and could invest an additional $20 billion "tied to certain commercial milestones" in the future. There are counterparts and some circular financing, of course, but also a clear pattern: Amazon has no winning horse in the AI race, so it is betting on its competitors.

More circular financing. Amazon now has alliances, in the form of active investments, with the two leading AI companies in the world. In return, both OpenAI and Anthropic commit to huge spending on its AWS services. There is a lot of circular financing here: I lend you the money so that you spend it on me. The houses of cards that OpenAI and Anthropic are building carry clear risks, but the industry is totally immersed in that maelstrom.

In Xataka | OpenAI is making the tech industry tie its destiny to its own. For the sake of the global economy, it had better work

Analysts warn. There are concerned analysts here and others who defend this type of agreement. M. Mohan asked on X why regulators are not on top of these kinds of financially dangerous agreements: the domino effect if OpenAI or Anthropic fall could be terrible. For others, like the well-known Jim Cramer, this is not circular financing. According to him, circular agreements are designed to inflate profits, and here no one's profits are being inflated. His argument is that Amazon has real computing capacity, Anthropic needs real computing capacity, and the value of the investment is genuine.

History repeats itself. The same debate occurred in January with OpenAI, and the conclusion was the same then: the image of circular financing is there, but it does not necessarily imply fraud. It implies that Amazon has found a way to monetize the AI craze without betting on any particular model. Or rather, by betting on the two that seem to be winning the race. But everyone is doing it.

The numbers of the agreement with Anthropic. Amazon puts up $5 billion immediately, at the company's current valuation of $380 billion. It is also committed to investing up to an additional $20 billion linked to "certain commercial milestones" that have not been specified. In exchange, Anthropic commits to using Amazon technology, and specifically its Trainium and Graviton chips, for the next decade. No less than 5 GW of computing capacity is secured, which is more or less the capacity consumed by New York City.

This is perfect for Anthropic. The Anthropic statement about the agreement contains an interesting paragraph. In it, the company admits that the demand for AI from companies, developers and users is generating "inevitable tension" in its infrastructure. In other words: they can't keep up, so they are resorting to measures that "penalize" excessive use of their AI models. They restrict session limits during peak hours, switch the enterprise pricing model to "pay as you go", or lower the effort level of their models, leaning into token inflation. The agreement with Amazon makes it possible to mitigate the problem of computing shortages.

The race for gigawatts. The truth is that Anthropic has been moving for months to get ahead of growing problems with the computing capacity it can access. In a few weeks we have seen it secure Amazon's 5 GW as well as "multiple gigawatts" of computing equipment contracted with Google and Broadcom.

What Amazon is actually building.
Viewed as a whole, Amazon's strategy is simple and elegant. It doesn't need to win the AI model race, which is unpredictable and extraordinarily expensive. It only needs whoever wins it to depend on Amazon and its infrastructure. By investing at the same time in two rivals like Anthropic and OpenAI and securing massive spending contracts from both, it achieves something striking: it turns uncertainty into an asset. It doesn't matter who wins, because Amazon will end up getting paid either way. This also reinforces the relevance of its Trainium and Graviton chips, validating its commitment to its own silicon.

Win-win. The agreement seems perfect for both parties. Amazon secures, as we said, consumption of its infrastructure for the next ten years, and Anthropic obtains an investment that increases its market value once again. The same happens with OpenAI, and in both cases these agreements and financial backing only reinforce expectations about their imminent IPOs.

Image | Fortune Brainstorm TECH

In Xataka | OpenAI and Anthropic have proposed the impossible: lose $85 billion in one year and survive

If the question is what differentiates Samsung from its competition, Charlie Bae, Samsung’s product director, is clear: ecosystem

The television market is more contested than ever, and traditional brands no longer monopolize sales like they used to. As in other sectors such as automotive or smartphones, Chinese manufacturers have stopped competing only on price and now also compete on features. Hisense reached second place worldwide in the premium segment with a 24% share in the third quarter of 2024. TCL, for its part, surpassed Samsung in the segment of televisions of 80 inches or more during that same period, holding a 23% share compared to Samsung's 19%. Both Chinese brands arrived at CES 2026 presenting their own technologies based on the evolution of their MiniLED panels: Hisense with its RGB MiniLED evo, capable of exceeding 110% of the BT.2020 standard, and TCL with its SQD MiniLED as an alternative to OLED. The war is no longer about inches or prices; now the dispute is over quality. In this context of reconfiguration in the mid- and high-end market, we have had the opportunity to speak with Charlie Bae, head of Samsung's television division in Europe.

From volume to value: Samsung's new scenario. When asked about Samsung's two decades of leadership in the global TV market, Bae doesn't resort to triumphalism. Aware of the change taking place in the market, his reading is more nuanced, almost concerned about what is coming. "The market is transforming: it is going from being driven by volume to being driven by value," he explains. "Due to the current economic situation, people are more conscious of what they spend. During COVID they spent a lot on changing their televisions, but now, when they consider renewing their TV, they are more cautious and think about the practical side."

That consumer caution is, in Bae's opinion, both a challenge and an opportunity. A buyer who thinks twice is not necessarily a lost buyer; he is a buyer who can be convinced with solid arguments. And Samsung wants to be the brand that provides them. One of Bae's most compelling arguments against Chinese rivals is not technological, but mathematical. According to the Samsung manager, a cheap television lasts on average between three and five years. A Samsung television, he tells us, lasts more than seven or eight years on average. "Think about it like this: if your TV lasts three or four years, you can only watch one World Cup. With Samsung, you can watch two." Added to this is the commitment to seven years of operating system updates. "Even if you bought your TV last year, you'll still be able to use the new AI features we launch. We want people to buy with the peace of mind that their TV is a long-term investment," says Bae.

Samsung's response to the competitive pressure from Chinese brands has a key piece: artificial intelligence. "15 years ago we introduced the Smart TV and no one imagined that it would become the standard. Today no one conceives of a television without applications, without being able to watch what they want when they want. That change led us to AI. Without a doubt the era of the 'AI TV' will continue to develop over the next five years," he concludes. However, Bae is careful to separate the widespread hype surrounding this technology from its actual substance. "Previously, AI focused on optimizing the image and sound quality of the television," he admits. "But now it's visible and you can ask the TV questions about recommendations, travel plans, anything.
The TV is something you can talk to, not just something you watch." According to the manager, what differentiates Samsung products is that they apply this technology in a way that is useful to users. He gives as an example the AI Football Mode included in its televisions, which allows something hitherto unthinkable: silencing the noise of the stands in a football match without turning off the commentators. "If you're watching the game at night and don't want to turn up the volume, you can simply mute the stands and still hear the commentary clearly," explains Bae.

Beyond AI: OLED, MiniLED and MicroRGB. In addition to the upheaval in sales, television display technologies have stepped on the accelerator with the democratization of MiniLED panels in mid-range televisions, the brightness and color volume improvements that QD-OLEDs offer, and the new generation of MicroRGB screens. In this regard, Bae rejects the idea of a single screen technology monopolizing the entire television market. "Technology continues to evolve, and I do not think that a single one is going to dominate the market. We do not focus on a single technology; we work on all of them in parallel, because each one responds to different user needs," says Samsung's product director. Samsung, he assures, works on all fronts: from the transparent MicroLED exhibited at CES to the 130-inch MicroRGB, passing through high-brightness OLED. But also in formats that no one expected. In fact, Bae not only assures that Samsung will continue developing its catalog to offer different screen technologies, he is also committed to flexibility in screen sizes and formats. All this in a context of televisions with ever larger diagonals and living rooms with ever fewer square meters. "There are consumers who prefer small screens. We have The Movingstyle, a 27-inch touch screen that you can move around the house. In Europe, the number of single-person households is growing and homes are getting smaller, so you may be interested in a small, portable screen, not one with many inches," insists the executive. In addition to the new panel technologies arriving in brands' catalogues, Samsung also highlights other innovations that improve the viewing experience, such as Glare Free, the anti-reflective system developed by Samsung that eliminates reflections and glare from windows and lights on the television screen. "Spain is a country with a lot of sun, so if you are … Read more

35% of the chipmaking machines in China's factories are already of Chinese origin

Foreign lithography and wafer processing equipment manufacturers are selling less and less in China. In 2024 the country led by Xi Jinping represented 41% of ASML's revenue, but in 2025 this figure dropped to 33%, and in 2026 it will presumably contract to 20%. Something very similar has happened to the American wafer processing machine manufacturer Applied Materials: its sales in China have gone from 37% of its total in 2024 to 30% in 2025. In addition, sales in China of the American companies Lam Research and KLA, and of Japan's Tokyo Electron, also decreased during 2025 compared to 2024.

This evident trend is the consequence of two factors. On the one hand, US sanctions prevent American and allied manufacturers of lithography and wafer processing equipment from delivering their most sophisticated machines to their Chinese clients; the Dutch company ASML is most likely the most affected in this scenario. On the other hand, in response to US pressure, the Chinese Government is supporting the adoption of machines of Chinese origin in its integrated circuit factories. In fact, in 2025 domestic tools represented 35% of the equipment in use in semiconductor plants, and Xi Jinping's Government aims to reach 50% in new factories during 2026. Its purpose is clear: China's chip industry needs to achieve technological independence as soon as possible in its fight with the United States.

China has made great progress, but lithography remains its weakest point. The resources that the Chinese Government is allocating to its designers and manufacturers of wafer processing equipment are bearing fruit, and they already compete head to head with foreign companies in the fields of deposition, thermal processing, etching and wafer cleaning. However, there are still no extreme ultraviolet (EUV) photolithography machines of Chinese origin in Chinese IC factories. Presumably they will arrive before this decade ends, but for the moment this is China's real Achilles heel.

One of the Chinese companies worth keeping track of is Pulin Technology. This organization has opted, like Naura Technology, AMEC (Advanced Micro-Fabrication Equipment Inc. China) or Piotech Inc., to develop its own cutting-edge photolithography machines. And the achievements are coming little by little: in mid-2025 Pulin shipped to one of its clients its first cutting-edge piece of equipment using nanoimprint lithography technology (known as NIL, for NanoImprint Lithography).

NIL technology is not new. The Japanese company Canon has had its own commercial NIL solution for years, and presumably its operating principles are essentially the same as those of the machine designed by Pulin. On paper, NIL photolithography equipment is an alternative to the extreme ultraviolet (EUV) lithography machines designed and manufactured by the Dutch company ASML, although not to the high numerical aperture version of that equipment, which is currently the most sophisticated and expensive in existence. Very broadly speaking, producing silicon wafers on an EUV machine requires transporting, with great precision, the geometric pattern described by the mask to the surface of the silicon wafer using ultraviolet light and extremely refined optical elements. NIL lithography, however, allows the pattern to be transferred to the wafer without an extremely complex optical system intervening in the process.
The NIL strategy is simpler and cheaper, but it also involves executing several sequential processes that make it slower than EUV and DUV lithography; the back-of-the-envelope sketch below shows why throughput matters as much as the purchase price. Canon claims that its nanoimprint lithography equipment can be used to manufacture integrated circuits comparable to the 5nm chips that TSMC, Samsung or Intel produce with ASML's EUV machines, and that in the future, with coming refinements, it will be able to manufacture 2nm chips. In addition, a NIL machine costs ten times less than an ASML EUV machine: 15 million dollars, compared to the 150 million dollars the Dutch company asks its clients for an EUV machine with a numerical aperture of 0.33. We still don't know how much each Pulin NIL machine costs, but it is reasonable to predict that at most it will be comparable to the Canon machine.

Image | Naura Technology

More information | Tom's Hardware

In Xataka | Japan wants to end the Netherlands' leadership in lithography equipment. This is its plan to achieve it
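To make that cheaper-but-slower trade-off concrete, here is a toy amortization sketch. Only the $15M and $150M list prices come from the article; the throughput and lifetime figures are hypothetical placeholders, since real numbers for Pulin's machine are unknown.

```python
# Back-of-the-envelope capex-per-wafer comparison between an EUV machine
# and a NIL machine. Only the list prices ($150M vs $15M) come from the
# article; throughputs and lifetime are HYPOTHETICAL placeholders, since
# NIL's sequential steps make it slower than EUV by an unknown factor.

def capex_per_wafer(machine_cost_usd: float, wafers_per_hour: float,
                    lifetime_hours: float = 8 * 365 * 24) -> float:
    """Amortized machine cost per wafer pass over an assumed 8-year life."""
    return machine_cost_usd / (wafers_per_hour * lifetime_hours)

euv = capex_per_wafer(150e6, wafers_per_hour=160)  # assumed EUV throughput
nil = capex_per_wafer(15e6, wafers_per_hour=20)    # hypothetical NIL throughput

print(f"EUV: ${euv:.2f} per wafer pass, NIL: ${nil:.2f} per wafer pass")
# With these made-up numbers NIL still comes out slightly cheaper per wafer,
# but a tool 8x slower would erase most of its 10x price advantage.
```

With these assumptions, a machine that costs ten times less but runs eight times slower keeps only a thin cost advantage per wafer, which is why throughput, not just price, will decide whether NIL can really displace EUV.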

Who is Johny Srouji and why this great unknown has just become the second most powerful person at Apple

For those who have been following Apple for a long time, Johny Srouji is no stranger. For the rest of the world he is, but after the appointment of John Ternus as CEO of Apple, this Israeli engineer has become the second most powerful person in the company. The question is obvious: who is Johny Srouji?

Who is Srouji and why does he matter? Born in Haifa, Israel, in 1964 to a middle-class Christian Arab family, Srouji studied computer science at the Israel Institute of Technology (Technion) and graduated summa cum laude in both his bachelor's and master's degrees. He worked at Intel and IBM before Apple hired him in 2008 with a very clear assignment: to design the company's first chips. He did much more than that.

The revolution made chip. That first chip designed by Srouji was the Apple A4, which debuted in 2010 in the original iPad and the iPhone 4. From there, Srouji forged one of the most prestigious hardware careers in the recent history of the technology industry. The A7 of 2013 was the first smartphone SoC to use a 64-bit architecture, and then came the revolution of the Apple M1, with which the company definitively got rid of its dependence on Intel in the Mac.

But his work goes further. His official title until now was senior vice president of hardware technologies, but it did not reflect the real scope of his work. Srouji not only led chip design, but also that of batteries, cameras, storage controllers, sensors, displays, cellular modems and other critical components across the entire family of Apple devices. Almost everything that makes these products work the way they do is largely due to the work of Srouji and his team. With the new position, his responsibility expands and he will now control the entire cycle: not only the hardware technologies themselves, but also the physical design. It's a colossal challenge, but if anyone seems prepared to take it on, it's Srouji.

He was about to leave. In December 2025 Bloomberg reported that Srouji had informed Tim Cook that he was seriously considering leaving Apple in the near future. Two days later, Srouji himself published a message to his team denying the news, but the damage was done. Losing Srouji would have been a disaster for Apple, and it is very likely that this new position is in part Apple's response to that alarm signal. Textbook talent retention, raised to the maximum power.

New position, new structure. In the internal communication that Srouji sent to his team, the engineer detailed how he will organize the division into five areas:

Hardware engineering: led by Tom Marieb, an Intel veteran who joined Apple in 2019.
Silicon: directed by Sri Santhanam, a manager with a long career at Apple.
Advanced technologies: supervised by Zongjian Chen.
Platform architecture: led by Tim Millet.
Program management: managed by Donny Nordhues.

In that message, Srouji acknowledges that this "represents a significant change" but believes it will work thanks to the whole team. He seems very clear about how he wants to work with his people.

A merger with a lot of historical sense. The reunification of hardware engineering and the hardware technologies division under the same leader is not entirely new. It is the structure that Apple had for years under Bob Mansfield, its former head of hardware until 2013, who then took charge of the failed Project Titan, Apple's car.
That's when those two areas were divided, something that allowed both Ternus and Srouji to progress in their domains, but also caused some structural tensions between teams that had to collaborate. Bringing them back together is a clear commitment to strengthening that collaboration.

The story overshadowed by Ternus's appointment. It is normal that the vast majority of headlines go to Ternus, who will decide the future of the company from now on, but Apple is above all a hardware company. That Srouji now becomes the leader of all that hardware makes this engineer a person with enormous power within the company. The change bodes well for the facet of the product that both he and Ternus master, and without a doubt interesting times await us at Apple.

Image | Apple

In Xataka | John Ternus, vice president of Apple: "The iPhone Air had been in development for years, but we had to say 'no' until now"

When they told us all the advantages of intermittent fasting, they forgot one small detail: that it could make us bald.

For years we were sold intermittent fasting as the weight-loss and metabolic-health strategy of the future. It is logical: it was easy to implement, reasonable and very striking. It had everything it needed to become a fad, and so it was. It is now, as the first long-term studies come to an end, that we are beginning to really understand its pros and cons. The most striking one, of course, has to do with hair.

What exactly is intermittent fasting? In general terms, we call 'intermittent fasting' a diet that alternates periods without food restrictions with brief periods of fasting. 'Fasting', here, is a deliberately elastic term: it can mean eating absolutely nothing or significantly reducing the number of calories consumed.

The idea behind it sounds good. When we undergo prolonged calorie restriction, the body goes into "savings mode" and that causes weight loss to slow down (or at least stall). Intermittent fasting attempts to trick the body into not adapting to the new calorie restriction and, therefore, into continuing to "spend" at a normal rate.

And does it work? That's the bad news. "Research does not consistently show that intermittent fasting is superior to continuous low-calorie diets" when it comes to weight loss, says the most complete analysis on the subject, after reviewing almost fifty studies. The clinical trials carried out since then only insist on the same thing: in general terms, the results are identical to those of other conventional diets, both in the dropout rate and in the weight lost or the improvement in health markers. The choice of one method over another, ultimately, has more to do with individual likes and dislikes than with any kind of extra scientific evidence. After all, everyone has a peculiar relationship with food and, consequently, there are some strategies that 'fit' us better than others. In other words: there are people for whom it works, and the truth is that nothing is wrong with that.

Little by little, researchers are discovering good things (it can help intestinal cells regenerate) and bad things (it could promote the formation of precancerous polyps). So, bit by bit, we are better understanding what it does, what it stops doing and which mechanisms are behind intermittent fasting. That's when the surprises begin. Because, for example, a trial carried out with mice has discovered that intermittent fasting slows hair growth. Researchers at Westlake University (in Zhejiang, China) took about 50 mice, shaved them and divided them into groups with dietary restrictions (fed every 8, 16 or 48 hours) and one without restrictions, the control group. After a month, the mice that could eat without restriction had recovered their hair. Those that fasted, on the other hand, had only partially recovered it after 96 days.

How? Why? What is happening here? The first thing to make clear is that the researchers "don't want to scare people away from intermittent fasting", but rather to highlight "the importance of taking into account that it could have some unwanted effects." Bearing this in mind (and that the study is in mice), the answer is both simple and full of uncertainties: to begin with, hair growth is a process that requires constant and balanced nutrition.
But the researchers believe the problem could go further: it is possible that "the body uses fat reserves instead of glucose and this could trigger the release of chemicals that damage hair cells." However (and this is important), the research is at a very early stage and there is still much to investigate. After all, as the Spanish saying goes, they paint opportunity bald.

Image | Seika

In Xataka | The great promise of science to end baldness is not a transplant or a medicine: it is a vaccine

A version of this article was originally published in February 2025

“Left click, right click”, this is how the AI ​​decides an attack in war. China, Russia and the US need fewer and fewer humans

A group of Google engineers signed an internal letter to protest against a project in which their own software was being used by the Pentagon, sparking an unprecedented debate within the company about how far the technology they had created should go. Since then, almost 10 years have passed: an "eternity" given the pace at which AI has been deployed.

The war accelerates… without humans. The New York Times reported last week that modern warfare is entering a phase in which human intervention is no longer the center of decision-making, but rather an almost symbolic step within processes dominated by algorithms, where artificial intelligence systems identify targets, recommend attacks and generate complete plans in a matter of seconds. Programs like Project Maven, today developed by Palantir and integrated with models like Anthropic's, show the extent to which the decision chain has been compressed: satellite images, drone data and intercepted signals are automatically processed to generate target lists and attack solutions, reducing human intervention to something as simple as selecting options on a screen. In the words of Pentagon officials, it is as simple as "left click, right click".

Powers in the same race. At the center of this transformation are the United States, China and Russia, competing to lead a new arms race based on autonomous systems capable of operating without direct intervention. In China, for example, the development of drone swarms coordinated by artificial intelligence and of platforms capable of operating alongside manned fighters reflects a commitment to scale and automation. Meanwhile, Russia is betting on systems like the Lancet drones, which are evolving toward autonomous target selection capabilities. For its part, the United States is trying to close the gap by encouraging companies like Anduril to speed up production of autonomous drones, in a race where the speed of development is almost as important as the technology itself. (Pictured: the Chinese WZ-8 drone.)

Ukraine as a turning point. As we have been reporting, the war in Ukraine has been the turning point that has converted these technologies into real combat tools, demonstrating that relatively simple systems can evolve rapidly towards semi-autonomous capabilities and change the balance on the battlefield. Adapted commercial drones, unmanned vessels and data analysis systems have allowed Ukraine to resist a superior adversary, while Russia has responded by progressively incorporating automation into its own systems. As analyst Michael Horowitz points out, "the battlefield in Ukraine has served as a laboratory for the world," accelerating a transition that is no longer experimental, but operational.

Silicon Valley at war. Unlike previous arms races, the Times recalled that the role does not fall solely on states, but also on the technology companies and startups that are redefining military development. There are companies like Google, which initially participated in projects like Maven before withdrawing due to internal pressure, while others like Palantir or Anduril have occupied that space with a vision more aligned with defense. In China, the "civil-military fusion" model directly integrates private companies into the development of military systems, while the West attempts to replicate that dynamism with multimillion-dollar investments and growing collaboration between Silicon Valley and the Pentagon.

Algorithms against algorithms.
The result is a form of war in which the confrontation is no longer only between armies, but between automated systems operating at speeds impossible for humans: a scenario we have described before, where drones launch drones to take on other drones and sensor networks connect globally to execute attacks in real time. Projects like the Chinese attempt to replicate networks similar to the American Joint Fires Network reflect this trend toward an interconnected war, one where a sensor at one point on the planet can trigger an attack at another without direct intervention. At this point, superiority no longer depends solely on the quality of weapons, but on the ability to integrate data, process it and act faster than the adversary.

Uncontrolled speed. There is no doubt that this acceleration carries risks that worry even those who pushed these systems, as automation can trigger military responses before humans can intervene or fully understand the situation. Studies such as those from the RAND Corporation have shown scenarios in which autonomous systems inadvertently escalate conflicts, while experts warn of a possible "escalation spiral" driven by the decision speed of machines. As General Jack Shanahan, promoter of Maven, acknowledged, the reality is that there is a danger of deploying "untested, insecure and poorly understood" systems in a competitive context where each actor fears being left behind.

Fewer humans, more automation. Thus, the panorama taking shape is that of an increasingly automated war, where human intervention is progressively reduced and critical decisions are delegated to artificial intelligence systems capable of analyzing, deciding and acting in seconds, which is very different from doing it "well". From autonomous drones to target analysis platforms, through global combat networks, the trend seems clear: a war of the immediate future that will be decided less in offices and more by algorithms, in an unstable and certainly chilling balance, because technological speed is on track to surpass the human capacity to control it in the middle of a war.

Image | StockVault, Infinity 0

In Xataka | Russia is no longer surrendering to Ukrainian soldiers, but to machines: the rules of war are being redefined

In Xataka | China was the power that launched drones. Now it has realized their danger with one decision: closing the sky to them

The controversial measures with which we have shielded the grid, one year after the collapse

Next April 28 will mark exactly one year to the day since Spain and Portugal faded to black: an unprecedented "energy zero" in the last two decades that left nearly 60 million citizens without electricity, without internet, without traffic lights and with the banking system paralyzed for up to 16 hours. As the magazine freen reflects, that day we suddenly discovered that something we take for granted (electricity) is the fragile foundation on which our entire modern life rests. One year after the event, the initial shock has given way to data. We no longer ask only whether such a blackout can happen again, but how much it is costing us to avoid it and whether we have really learned the lesson.

D-day is about to arrive. Twelve months later, we finally have the "official autopsy". The European Network of Transmission System Operators for Electricity (ENTSO-E) published a comprehensive 472-page report which concludes that there was no single cause, but rather a "perfect cocktail" of multiple factors. A sudden voltage surge originating in Spain triggered an instability that the system was unable to stop. As we have already explained in Xataka, the failure can be defined as "operational blindness": the renewable plants operated with a fixed power factor, did not know how to read the grid's voltage surge and, for safety reasons, disconnected suddenly, causing a rebound effect. In addition, as the BBC adds, local generator voltage controls were not fully aligned with the operator's requirements. The crisis demanded millisecond reflexes, but voltage control was being done manually. In fact, if Europe did not fall like a house of cards, it was due to an almost miraculous technicality: a relay in the Hernani substation (Gipuzkoa) acted like a fuse, cutting the connection with France in milliseconds to shield the continent. Ironically, just ten minutes later it was that same interconnection that served as life support to resuscitate the system.

The big question: what has Spain done differently? The fear of a new blackout has changed the rules of the game, but at a high price for citizens. Red Eléctrica has imposed a "reinforced" operating model. This means prioritizing security over cost, keeping more expensive, stable backup plants running, such as gas combined cycles. The result? Spaniards have paid an extra 666 million euros in these eleven months on "adjustment services" alone, which have shot up 43%. In the legislative sphere, the Government has approved Royal Decree-Law 7/2026 to streamline bureaucracy through the "Renewable Acceleration Zones" (ZAR). However, experts warn that, since there is still no structured capacity market, investing in the necessary storage systems (batteries) remains a financial risk for developers.

There's more shielding going on. The collapse not only left us in the dark; it also left us cut off, although very unevenly. While some completely lost signal, others managed to keep it thanks to the logistical efforts of some operators. To avoid this coverage lottery, the CNMC has proposed that Telefónica, Vodafone and MásOrange offer "national roaming" in emergencies: if your operator's network goes down, your mobile phone would automatically connect to a competitor's, following the Swedish model. Added to this is the request to make the alert system (ASA) mandatory in cars with digital radio (DAB+), to send warnings to the population immediately even if the internet is down.

The false culprit and the new energy guzzler.
After the collapse, many were quick to blame green energy, but the reality is different. As explained by freen, the problem is not that Spain has a lot of solar and wind power, but that the electrical grid is still stuck in the 20th century, designed for fossil-fuel power plants and not for a decentralized system. In fact, Spain is a fascinating laboratory. According to EUobserver, the country has weathered the recent price crisis caused by the Third Gulf War much better than its European neighbors thanks to its enormous solar shield. However, the trauma of the blackout has caused an absurd side effect: operators are so afraid of overloading the grid that they force solar and wind farms to disconnect more frequently. Curtailment (clean energy generated that is thrown away) has gone from 2% to 7%. And as if that were not enough, the saturated grid must absorb the imminent arrival of a new energy-hungry giant: massive data centers for artificial intelligence.

The exchange of accusations is served. In the offices, the short circuit has only just begun. As the Financial Times details, the National Markets and Competition Commission (CNMC) has opened formal investigations. Red Eléctrica (REE) faces proceedings for "very serious" infractions, while giants such as Iberdrola, Naturgy, Endesa and Repsol face possible fines of up to 60 million euros for "serious" infractions. In addition, as Público reports, there are up to twenty sanctioning files open. REE defends itself by insisting that the opening of a file does not prove its guilt. Meanwhile, a Senate report promoted by the PP directly blames the Government, REE and the CNMC for ignoring known vulnerabilities, according to Reuters. And the tension is reaching its limit: electricity companies like Endesa and Iberdrola have asked a judge for access to more than 8,000 calls and emails from REE executives during the hours of the blackout, after the leak of audio recordings in which technicians warned of the danger 15 days earlier.

An electric heart that remains at risk. Spain is "a gold mine without a road", in the words of Patxi Calleja, a director at Iberdrola. We have the sun, the wind and the technical capacity. But the great lesson of this past year is that true energy independence is no longer played out at the national level, but at the local level, where factories and homes install their own batteries and hybrid panels so as not to depend on the fragile central system. We survived the blackout and have avoided another one by reaching for our wallets and operating defensively. But as long as the procedures for new lines take a decade, mass storage … Read more

Five years ago, Venice spent more than 5 billion on a system of barriers against the sea. Now it is looking for a plan B

There was a time when Venice looked at the Adriatic with ambition. The sea not only shaped the city, permeating its DNA; it also propelled it until it became a naval power that fought for dominance of the Mediterranean. Today things are different. La Serenissima (now a tourist power) watches with growing concern the coming and going of the tides, the same tides that in 2019 submerged it under 187 cm of water, flooding 80% of the city. The reason is very simple: everything indicates that the multimillion-euro system Venice equipped itself with a few years ago to protect against the threat of high water will not take long to become obsolete. And it is not very clear what the alternative is.

One figure: 18. The threat of flooding is not new in Venice. In fact, one of the worst episodes in memory was suffered six decades ago, in November 1966, when an intense storm pushed the water to 194 cm, flooding much of the city. However, experts have been detecting worrying signs for some time. It is not just that Venice is sinking or that the sea level is rising (which it also is). There are increasingly clear signs that floods will become more frequent in the future. Recently, a group of researchers analyzed the "extreme" episodes suffered by the city, those in which 60% of its surface was flooded. Over the last century and a half they counted 28 incidents of that kind. The surprising thing is that the vast majority of them (18) were concentrated in the last 23 years.

One measurement: 0.42 m. Today more than half of Venice sits just 80 to 120 cm above mean sea level, and projections show that this scenario will soon worsen: in the best case, if we manage to drastically reduce polluting emissions, the sea will rise 0.42 m by 2100. In the worst case, it will rise 1.8 m, which would greatly complicate the outlook for La Serenissima. In fact, high tide already leaves St. Mark's Square only 30 cm above the water level.

One name: Mose. Aware of how much is at stake in Venice, the Italian Government has long sought a way to protect it from floods. The result was Mose (Modulo Sperimentale Elettromeccanico, or experimental electromechanical module), a system made up of four barriers and 78 independent mobile gates that allow the authorities to protect the Venetian lagoon from the high-water tides that flood the city. The objective: to temporarily isolate the lagoon from the Adriatic and thus protect Venice from the most dangerous tides. To achieve this, the barriers were strategically installed at the Lido, Malamocco and Chioggia inlets. Each gate measures 20 m wide and between 18.6 and 29.6 m long.

An investment: 5 billion. The project is said to have mobilized an investment of more than 5.5 billion euros (its execution was marred by corruption). Work began in 2003 and, after several delays, a first test was carried out in October 2020, in an event led by the then Prime Minister Giuseppe Conte. A year earlier, Venice had suffered one of the worst floods in living memory, during which the water reached 187 cm, flooding part of the entrance to St. Mark's Basilica.

An indicator: frequency. The problem is that the authorities are resorting to Mose much more often than expected. EuroWeekly reports that in less than a month, between January 28 and February 19, the system was activated 30 times. Other media report that since its inauguration at the end of 2020, the barriers have saved Venice from flooding on 154 occasions.
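Putting the figures quoted above side by side makes the problem easy to see. The toy arithmetic sketch below uses only the numbers from this article (the 80-120 cm elevation band, St. Mark's 30 cm high-tide margin, and the 0.42 m / 1.8 m rise scenarios); real flooding depends on storms, winds and subsidence, so treat it purely as an illustration.

```python
# Toy freeboard arithmetic built only from the figures quoted in the article.
# Note the two references differ: the city band is measured against mean sea
# level, while St. Mark's margin is against the water level at high tide.

CITY_BAND_CM = (80, 120)          # much of Venice sits in this elevation band
ST_MARKS_MARGIN_CM = 30           # margin above the water at high tide today
RISE_SCENARIOS_CM = {"best case (0.42 m)": 42, "worst case (1.8 m)": 180}

for scenario, rise in RISE_SCENARIOS_CM.items():
    low, high = (elev - rise for elev in CITY_BAND_CM)
    st_marks = ST_MARKS_MARGIN_CM - rise
    print(f"{scenario}: city band at {low} to {high} cm vs the new mean sea "
          f"level; St. Mark's high-tide margin: {st_marks} cm")
```

Even in the best case, St. Mark's high-tide margin is wiped out (30 - 42 = -12 cm), which helps explain the estimate, discussed below, that one more meter of sea would force the barriers to close around 200 times a year.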
The problem is that using Mose does not come free for the region, in economic terms or at the social and environmental level. Raising the enormous Mose floodgates has a direct cost, but also an indirect one: by isolating the lagoon, the system disrupts, for example, the activity of the port sector and interrupts maritime traffic with the port of Marghera. The Guardian points out that pressing Mose's button has an economic impact of more than 200,000 euros for Venice each time. For this year's Carnival alone, the total bill would be around five million euros.

An extra concern: the lagoon. Not everything is measured in operating costs, maritime traffic and economic impact. Altering the tides in the area also affects its ecosystem, and that worries experts like Andrea Rinaldo, of the Lagoon Authority's scientific committee, especially if two fundamental facts are taken into account: first, the frequency of use in recent years; second, the forecasts for sea level rise. "With one more meter, the Mose barriers would have to be closed an average of 200 times a year, which means that they would practically always be blocked," explains Rinaldo. "When this happens, the lagoon loses its function as a transitional environment. It would become a pond."

A victim: the lagoon itself. As The Guardian explains, by blocking the flow of water the barriers encourage the growth of algae. The problem is that when these die and decompose they directly affect the quality of the water and the rest of the flora and fauna. Does that mean Mose was a mistake? Rinaldo thinks not. The changes are simply happening much faster than the engineers expected, forcing authorities and technicians to think about the future in the medium and long term. At the end of the day, if Mose taught anything, it is that projects of its magnitude are not approved and executed overnight.

One question: what to do? That is the great unknown. Those responsible for Mose are looking for ways to reduce its impact, but it is not an easy decision, among other things because Venetians themselves have become accustomed to the barriers and gates coming into operation at the slightest risk, points out Giovanni Zaroti, one of the system's technicians. Rinaldo mentions the possibility of launching an international call … Read more
