The US has launched a missile capable of burying the Tomahawk at Iran. The big question is: where is it firing from?

The image of an American precision strike has long been linked to silhouettes taking off from the sea or from the air. In recent years, however, the Army has invested billions in recovering a capability that seemed secondary: hitting very, very far from land. In that bet may lie one of the greatest transformations of modern military power.

A debut that changes the theater. The US has used the so-called Precision Strike Missile (PrSM), its new tactical ballistic missile, in combat for the first time, within the operation against Iran. It is not a minor evolution of the old ATACMS; it is a leap in range and concept. With a range of more than 500 kilometers (and room to grow toward 650 and even 1,000), it practically doubles the depth of ground fire available until now. As with many other "premieres", it is not symbolic: it is doctrinal.

A missile to bury the Tomahawk. The PrSM flies at speeds greater than Mach 3 in its terminal phase, allowing it to arrive sooner and better penetrate hardened targets. Compared with the slower, subsonic Tomahawk, the new system greatly reduces the enemy's reaction time and complicates interception. Additionally, two missiles fit in a single HIMARS launcher pod, meaning double the punch per vehicle. It does not replace the Tomahawk at strategic ranges, of course, but in regional scenarios the Tomahawk can be relegated to the background by the PrSM's speed, survivability and responsiveness against time-sensitive targets.

A PrSM pod seen in front of a US Army M142 during an exercise in Australia. The M142 normally carries a pod of six 227 mm rockets.

The Persian Gulf as a platform. At this point, geography explains a good part of the move. The Gulf has an average width of just 250 kilometers, with American allies aligned on the western bank and Iran occupying the eastern one. With a range of 500 kilometers, a land battery located anywhere on the Arab side can cover wide swathes of Iranian territory without needing to penetrate its airspace.
That makes the missile a perfect tool to support an air campaign without exposing fighters or depending exclusively on ships.

A test launch of a PrSM.

The key question: from where? The most decisive fact remains unknown: it has not been confirmed which Gulf country authorized the use of its soil to launch these missiles. The mystery is not technical; it is political. The reason? Allowing a US land battery to fire on Iran automatically turns that territory into a possible target of retaliation. Many states in the region have historically preferred to support Washington discreetly while avoiding public exposure. Put another way, the exact launch location determines which capital takes on the direct risk.

Hunting sensitive targets. Short-range ballistic missiles are especially effective against radars, mobile launchers and air defense nodes. Plus, they can be kept on permanent alert and strike within minutes when a target arises. In a conflict where neutralizing anti-aircraft systems is key to sustaining air superiority, the PrSM provides a ground-based suppression capability that until now relied heavily on aviation and naval missiles.

Beyond Iran. The PrSM's debut may also be intended to send a signal to other theaters, especially the Pacific. Its planned evolution includes anti-ship versions capable of attacking moving targets, as well as longer-range variants that will approach the threshold of medium-range missiles. We have covered this before: the US Army wants to regain prominence in long-range warfare, traditionally dominated by the Air Force and Navy. Iran, in that sense, has been the first real test bed.

Cost, volume and future. Here is the "but" of any ballistic missile. Each projectile can exceed a million and a half dollars, although the price has been dropping as production increases. The goal is to reach up to 400 units annually, which will expand the available inventory and facilitate its sustained use.
With future versions that could exceed 1,000 kilometers of range, the PrSM does not look like a mere substitute for the ATACMS. It is the first stone of a land-based architecture that seeks to project deep power from solid ground.

What is really at stake. In short, the real twist is not that the United States has launched a new missile in a war, but that it has done so from the ground and against Iran. If the Tomahawk has symbolized precision warfare from the sea, the PrSM aims to represent the return of the tactical ballistic missile as a flexible instrument of regional pressure. And as long as it is not known with certainty which ground ally it is launching from, the political dimension of that launch will remain as relevant as the technical one.

Image | CENTCOM, Australian Army, US Army

In Xataka | If the question is how much of Europe is within range of Iran's missiles, the answer is simple: a fairly large part

In Xataka | The arrival of the B-2s to Iran can only mean one thing: the search for the greatest threat to the United States has begun

It is a nod to Chinese Big Tech and a message for NVIDIA

Huawei has arrived at the Mobile World Congress with one objective: to show the world what these last five years of vetoes and sanctions have been good for. The company has just had the second best year in its history. It seemed impossible when the United States ostracized it, but these five years have served not only to regain the throne in the enormous Chinese market, but to build something bigger: the idea that China's technological evolution runs through Huawei's hands. The result is the announcement at the Barcelona fair of a line of SuperPoD supercomputers with a single objective: that Chinese Big Tech not have to depend on NVIDIA.

Return. Huawei has been collaborating with SMIC, China's great foundry, to create chips. Chips that feed both its consumer devices and the high-performance parts for large-scale computing. It is clearly difficult to do this without running up against Western vetoes (for example, its mobile processors lack 5G and are less powerful than those of Qualcomm or MediaTek), but it is making progress. The symbolic thing is that it has turned resilience into its best quality. If in 2020 it competed for the market with Samsung and Apple, posting revenue of 129 billion dollars, in 2025 it registered 127 billion dollars, which is impressive considering that most of it now comes from the local market. In this time, Huawei has positioned itself as a lifestyle brand with consumer devices, but also home automation and even cars. But if there is a great frontier today, it is artificial intelligence. And Huawei knew this was something to attack not only from the local perspective, but by issuing a global warning.

SuperPoD. These supercomputers are not actually new. The company presented them in mid-September last year with a more local focus, aimed at China. But before looking at the products, you have to understand what a SuperPoD is.
These are high-performance clusters that bring together thousands of specialized AI chips. And those chips are not from NVIDIA, which dominates the global conversation in AI computing, but Huawei's own: the Ascend line, which it has been developing for years and which China awaits like rain in May to break NVIDIA's hegemony. The idea is the same as in other technological sectors of the Asian giant: to depend on no one else. The new systems are the following:

- Atlas 950 SuperPoD: a cluster of up to 9,192 Ascend 950DT NPUs per system with up to 1,152 TB of unified memory.
- TaiShan 950 SuperPoD: the first general-purpose computing SuperPoD, available in two models (96 cores / 192 threads or 192 cores / 384 threads) for, say, massive virtualization or critical databases.

Local ecosystem. Huawei's approach is very interesting. The Ascend is not close to the power and sophistication of NVIDIA's chips, nor to the CUDA technology that has become the language of AI. But if each chip individually cannot compete for the most demanding tasks, Huawei's answer is to make those chips scalable. To do this, it has developed an ultra-high-bandwidth interconnect that links all these chips together so that, in practice, the cluster behaves like a single logical computer. That interconnect has been named UnifiedBus and, in its statement, Huawei says the idea is to "continue defending open source and open systems to accelerate developer innovation and the prosperity of ecosystems". This resonates with the government's objective: that companies such as Tencent, ByteDance, Alibaba or DeepSeek, which ran into the arms of the latest NVIDIA chips as soon as the ban was lifted, develop their technologies using 'made in China' solutions.

Ambition in the face of sanctions. All this comes in a tremendously turbulent context.
China is betting heavily on artificial intelligence and robotics as pillars of its technological roadmap, but NVIDIA still has the best product. There are analyses suggesting that Huawei's best chip is still five times less powerful than NVIDIA's best, and the United States has just made it clear that investment in AI is a matter of national security. The whole mess between Anthropic and the Pentagon has to do with how the United States demands that the AI of its private companies serve the state, on the argument that the AI of Chinese companies belongs to China, and China will not hesitate to do whatever it wants with it. Because computing power is, and will remain, at the core of the AI race, Huawei has shown it is doing everything it can to deliver the best tools. And Western sanctions have only helped China 'wake up' and begin shaping these technological solutions at an accelerated pace.

NVIDIA saw it clearly. It remains to be seen whether customers around the world will adopt Huawei's SuperPoD systems as an alternative to NVIDIA, but what is already on the table is that something is happening, at least in China. In the middle of last year, NVIDIA's CEO pointed out that before the vetoes NVIDIA held 95% of the market share in China, while today it holds only 50%. The vetoes did not stop China; rather, they accelerated the development of its own industry to the point that the competition is now fierce. In fact, the executive recently argued that it was absurd for the US to try to stop China with vetoes and sanctions, since China would achieve technological sovereignty sooner or later, and that the ideal would be to take an economic slice while it could and make Chinese Big Tech dependent on NVIDIA technology. And there Huawei's approach is very interesting, because yes, its chips may not be the most powerful, but they are massively scalable and adaptable to the needs of each company.
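As a quick back-of-envelope reading of the Atlas 950 figures quoted above, the published totals imply how much unified memory corresponds to each NPU on average. This is only a sketch of the arithmetic; whether the vendor counts TB as decimal or binary units is an assumption here:

```python
# Average unified memory per NPU in an Atlas 950 SuperPoD,
# derived from the figures quoted in the article.
npus = 9192                            # Ascend 950DT NPUs per system
memory_tb = 1152                       # unified memory per system, in TB
per_npu_gb = memory_tb * 1000 / npus   # assuming decimal TB -> GB
print(round(per_npu_gb, 1))            # ~125.3 GB per NPU
```

In other words, the headline capacity comes from aggregating thousands of modest per-chip pools into one address space, which is exactly the "scale instead of single-chip power" bet described above.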
Images | Huawei, Xataka

In Xataka | Huawei no longer competes: it is building its own …

The big problem with putting robots everywhere is that they get lost. An engineer from Elche believes she has the solution

It is no surprise that we see more and more robots in our daily lives: in restaurants bringing orders to the table, in the fields as seasonal workers, competing with couriers on deliveries... and that is without mentioning their applications in industrial-scale automation. Robots don't need rest, don't have labor rights and don't complain. But they get lost. And that is a real, very common problem for which a research team at the Miguel Hernández University of Elche believes it has found a solution.

The context. Autonomous robots need to know where they are in order to function, and that doesn't always happen: when the location reference is lost, whether because someone moves the robot, it is switched off or the environment changes without warning, the robot is unable to recover its position. Something as ordinary as running out of battery can become a technical drama. This phenomenon is not an isolated quirk; in fact, it has a name in robotics: the "kidnapped robot problem". Although we see more and more robots everywhere, the issue has gone without a robust solution for decades, not least because resorting to GPS can fail in settings such as indoors or near tall buildings. As Míriam Máximo, lead author of the article, explains: "It is a classic problem and very difficult to solve, especially in large environments."

The solution. What the Elche team has implemented is MCL-DLF, short for Monte Carlo Localization - Deep Local Feature, a system that combines two technologies: on one hand, a 3D LiDAR that emits laser pulses to draw a three-dimensional map of the environment, similar to what robot vacuum cleaners do; on the other, an artificial intelligence that learns which elements of the environment are most useful for orientation.

Why it matters.
Because having a reliable localization system is essential for any robotic deployment in real life: autonomous vehicles, delivery and logistics, assistance... robots may be increasingly common, but they remain tremendously dependent on supervision, and knowing where it is is essential for a robot to operate safely. The method also introduces an important change: it is self-contained, in that it does not require external infrastructure such as GPS to function, which makes it more robust and versatile across different real-world scenarios.

How it works. Its approach is hierarchical: it first recognizes large structures and then fine details, much as people do. When you arrive at an unknown place, you first retain the essentials (what neighborhood you are in, for example) and then look for more specific references to refine further. Furthermore, the system does not bet everything on one card: it maintains several position hypotheses simultaneously and discards or refines them as the sensor captures more information. Tests carried out for months on the university campus, under different lighting, vegetation and weather conditions, have shown more consistency than conventional methods.

A good start, with subjects pending. Beyond its promising results, the most striking thing about this research is its commitment to sensory autonomy: it does not depend on networks of beacons or on GPS, but on its own sensors. That makes it a potentially more versatile system. However, it faces the great historical challenge of robot localization: fragility in the face of changing environments. It is true that it has been tested under different conditions, but always within the campus; making the leap to more complex, constantly changing environments is its litmus test, along with additional validation in extreme conditions.
Finally, before any eventual commercial deployment, we will have to see how it integrates with other navigation systems and what its computational cost is.

In Xataka | Tesla has been building the Optimus for years. China has just shown up with fifteen companies and factories already set up

In Xataka | We already have so many "humanoid" robots that it is hard to tell one from another. This graph sorts it out

Cover | Enchanted Tools

MicroLED promises to be the Holy Grail of televisions. That is also its big problem today.

There are technologies that are born with enormous promise. MicroLED is one of them. Since Samsung introduced "The Wall" at CES 2018, the industry has been telling us that this technology is going to revolutionize the way we watch television. And it is right. The problem is that this revolution has not reached the living room of anyone who is not a billionaire. The technology has become the Holy Grail of the television industry, but the enormous cost of manufacturing it means that only the most exclusive and, let's say it plainly, extremely expensive models can integrate it. Unlike what has happened with OLED or MiniLED, manufacturers have not managed to reduce the production costs of these panels enough to make them competitive in mass manufacturing.

What is MicroLED and why is it so special? To understand MicroLED you have to know how current screens work. Traditional LED TVs have a layer of pixels that filters light coming from an array of LEDs installed at the back. It is, therefore, a backlighting technology, and it offers very good brightness. The problem is that when those screens need to display pure black, they cannot switch off pixel by pixel, so they switch off zones of those rear LEDs instead. The more dimming zones a set has, the finer the light control and the better the blacks. Even so, some light inevitably leaks through. It is not really black; the result is very dark grays at best. OLED technology solved that problem years ago by making each pixel on the screen emit its own light, which can be switched off individually. The result is perfect contrast, but with its own limits: the diodes that make up each pixel are organic, so they degrade over time and are susceptible to burn-in, leaving a permanent mark after many hours with a static image on screen.
In this sense, the promise of MicroLED is to strike the perfect balance between OLED and LED without the drawbacks of either. Like OLED, it uses microscopic LEDs as pixels, but made of inorganic materials that are much more stable and resistant to burn-in. MicroLED screens are thus capable of reaching OLED contrast levels, but with far higher brightness and a useful life measured in decades. It is literally the best of all worlds. And therein lies its trap.

The problem: manufacturing MicroLED is a nightmare. A 4K display has about 8.3 million pixels. In the latest MicroLED panels, each of those pixels needs three individual LEDs, which leaves us with almost 25 million microscopic chips that must be manufactured, placed and connected with nanometric precision on a panel the size of a television screen. The level of miniaturization MicroLED requires has limited its production to very large formats, given the challenge of fitting so many millions of diodes into a 55" or 65" panel. The process of moving those chips onto the panel en masse, what the industry calls mass transfer, is extraordinarily complex and, today, also extraordinarily expensive.

How expensive? To put it in context, one of the few MicroLED models that can actually be bought in stores is an 89-inch Samsung with a sale price of 109,000 euros. The LG Magnit, aimed at the extreme luxury market, hovered around 230,000 euros in its 118- and 136-inch sizes. That price range makes them unviable as home televisions (at least for most mortals' homes), hence the tiny market figures: in all of 2024, fewer than 1,000 MicroLED televisions were manufactured in the entire world. Samsung sells that many conventional televisions in a matter of minutes. Yet although these panels are not reaching living rooms, MicroLED is not stagnant. In fact, it is in full development.
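The chip count above follows directly from the panel's resolution; as a quick sanity check of the arithmetic:

```python
# Subpixel arithmetic for a 4K MicroLED panel, using the article's figures.
width, height = 3840, 2160        # 4K UHD resolution
pixels = width * height           # ~8.3 million pixels
leds_per_pixel = 3                # one red, one green and one blue LED
chips = pixels * leds_per_pixel   # ~25 million microscopic chips
print(pixels, chips)              # 8294400 24883200
```

Every one of those ~25 million chips has to survive the mass-transfer step, which is why yield, not physics, is the bottleneck described in the next paragraphs.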
This technology is growing strongly in the niches where price matters less than performance. In large-format signage it has been the standard for years: film and television studio backdrops, lobbies of luxury buildings or private movie theaters. In automotive, the dashboards of the future want bright, durable and efficient screens. And in the wearables and augmented reality segment, both Apple and Samsung have been investing for some time in bringing MicroLED to smartwatches and AR glasses, where extreme pixel density is critical and smaller production volumes make the cost more manageable. According to an analysis by Yole Group, the global MicroLED market could grow to nearly $5 billion in revenue by 2032, although most of that will come from those niche segments, not from the living room TV.

There are MicroLED and "MicroLED". The high production cost has pushed manufacturers to explore other ways of making this technology profitable while it matures. One solution has been to use MicroLEDs as a backlight system behind an LCD panel, rather than as self-emissive pixels. Strictly speaking, although such sets contain MicroLED technology, they should not be considered MicroLED TVs; even so, some brands use the term interchangeably in their trade names for advertising purposes. Being much smaller, MicroLED backlights allow far finer control of light and richer colors, but they still require an LCD panel to separate the colors of each subpixel. In other words, such a set behaves more like a MiniLED or a conventional LED TV than like an OLED. The good news is that, as brands like Hisense and Samsung have shown, MicroLED technology has already evolved from white diodes toward RGB MicroLED, which places a self-emissive RGB diode at each pixel and therefore comes closer to how an OLED works. This evolution, as other technologies went through before MicroLED, is the first sign that these panels are starting down a path of optimization to reduce production costs.
In fact, the models launched by Samsung during the last CES 2026 would be priced at around $30,000. It seems like an exorbitant figure for a television, but it must be taken into …

The economy’s big fear was a simultaneous global drought. Science has found our lifesaver

For years we have been watching climatic extremes hit different parts of the globe, with the experience in Spain still very fresh. But with temperatures rising to extremes, one of the biggest fears of climatologists and economists is the synchrony of global droughts: a scenario in which the main food-producing regions dry out at the same time. The good news is that science indicates the Earth (for now) is not heading that way.

A problem. Logically, if the main wheat-, rice-, corn- or soybean-producing countries of the world suffered a drought simultaneously, we would have a huge supply problem, which for many is a genuine nightmare scenario. But here the researchers have reached a conclusion: synchronized global droughts are severely limited and barely affect between 1.8% and 6.5% of the global land surface at the same time. Without a doubt, a great relief for the economists who saw in this the end of the world as we know it. The work has been published in Nature, and the most impressive part is that all of this is thanks to the oceans.

What we knew. Until now, we knew that major climate events such as El Niño or the North Atlantic Oscillation could alter rainfall patterns thousands of miles away through what scientists call "teleconnections". The research team itself had pointed this out in the past: there are interconnected drought nodes at different latitudes, most of them in North America, South America, Africa and Australia. That is, when there is drought in one place, it can propagate to another. But if these nodes are connected, why doesn't the entire planet dry out at once when an anomaly like El Niño occurs? The answer lies in oceanic variability.

An ally. The oceans act as an immense regulatory mechanism, which is why the authors literally speak of a phenomenon called 'geographic trapping'.
In this way, ocean dynamics force the scale of these hydrological extremes to remain confined to certain areas, preventing drought from spreading across all continents simultaneously through the different nodes.

What matters more is that it doesn't rain. Another finding that may be surprising debunks a common myth about extreme droughts. We tend to automatically associate the worst droughts with suffocating heat waves; the data from the last 120 years, however, are clear in showing that lack of precipitation dominates over high temperatures in determining the severity of a drought. That is to say, it matters more that it doesn't rain than that it is extremely hot. Specifically, lack of rain is responsible for two-thirds of the severity of these events, relegating temperature to a secondary role, though not a negligible one in a world moving toward warming of up to three degrees Celsius.

It is good news. That the planet has mechanisms to avoid a total global drought is excellent news for global food security and international markets, since it helps ensure supply for supermarkets. But the scientists warn that we should not let our guard down. Although 6.5% of land affected simultaneously, the maximum mentioned above, seems small on a planetary scale, if that percentage coincides exactly with the great "breadbaskets of the world", the economic and humanitarian disaster could be just as devastating. The regions identified as "hubs" host a large part of global agricultural production, and the study warns of a growing systemic vulnerability in those areas.

Images | edcharlie

In Xataka | The drought is turning water into a very scarce and valuable commodity in Spain. And there are already organized groups of thieves

Mercadona has become the great supermarket of Spain. Now it is becoming its great restaurant

On Saturday, at the gym door, I heard a group of friends talking about going out to eat. The debate ended when one of them proposed going to Mercadona and buying some hamburgers from the 'Ready to Eat' section. From then on the talk shifted from 'where to buy' to 'where to eat': in the supermarket itself, on the beach (one advantage of living in Galicia) or at someone's home. It could be a simple anecdote, were it not for the fact that that conversation at the gym exit hides something bigger: Mercadona is becoming Spain's great food supplier. So much so that it no longer rivals only the rest of retail, but also the bars, and it is winning that arm wrestle.

A percentage: 19.7%. A few weeks ago the consulting firm Worldpanel by Numerator (formerly Kantar) published a report that helps to understand the enormous weight Mercadona has achieved, not only in domestic retail but in the food sector in general: the Valencian chain accounts for a 19.7% value share of food and beverage consumption. That means it captures almost 20% of what we spend on food and drink, both inside and outside the home.

Company / collective - Value share in food and drink consumption:
- Mercadona: 19.7%
- Bars + cafeterias + terraces: 11.2%
- Independent restaurants: 8.6%
- Carrefour: 6%
- Lidl: 5.1%
- Quick-service restaurants: 3.4%
- G. Eroski: 3.1%
- DIA: 2.8%
- Consum: 2.7%
- Alcampo: 2%
- ALDI: 1.4%
- Full-service restaurants: 0.9%

Why is it important? Because that percentage shows that Mercadona already sells as much or more food than traditional hospitality, at least in terms of value. The Worldpanel by Numerator report shows that bars, cafes and terraces account for a value share in food and beverages of around 11.2%, and independent restaurants another 8.6%. Together they add up to 19.8%, a figure that surpasses Mercadona by only one tenth of a point. The list is completed by Carrefour, which accounts for 6%, Lidl (5.1%), quick-service restaurants (3.4%), G.
Eroski (3.1%), DIA (2.8%) and Consum (2.7%).

Half a surprise. That Mercadona accounts for 19.7% of what Spaniards spend on food is striking, but in reality it is hardly surprising. The data is explained by two trends that seem to move in opposite directions. The first is that we eat at home more and more: according to The Economist, spending on food outside the home fell 2.2% last year, while domestic consumption rose by half a point, 0.6%. Mercadona was able to anticipate this scenario and has been betting heavily on its 'Ready to Eat' section since 2018, a section offering prepared dishes, from starters to sandwiches, stews, paella, lentils, meatballs, pasta... By December the chain had rolled the service out to more than 1,110 stores. Hardly surprising if you consider that Juan Roig, the company's owner, maintains that kitchens will eventually disappear from homes.

Expanding its footprint. Mercadona is not only gaining strength as a competitor to the traditional hospitality industry (a sector facing its own internal challenges, such as the menu-of-the-day crisis); it is also doing so within retail. The Valencian chain has led the sector for some time, but that has not stopped it from continuing to expand its dominance. The Worldpanel report also shows that in 2025 the company consolidated its position in food distribution, increasing its share by 0.6 percentage points to take 27% of the entire 'pie'.

Going for the big baskets. Carrefour follows in the ranking with a 9% share, although the French firm suffered a decline of 0.7 percentage points, ahead of Lidl (6.9%), Grupo Eroski (4.3%), Dia (3.8%), Consum (3.6%), Alcampo (2.8%) and Aldi (2%). One of the keys that has allowed Mercadona to reinforce its leadership is the rise of the so-called "large baskets", that is, weekly or monthly shops, which concentrate household spending on its shelves.
In 2025, Roig's company reached a 42% share of this type of purchase, 0.9 points more than in 2024. Another of its advantages is the push of private labels in retail and the growing weight of "short assortment chains", those with a limited range and a strong focus on price.

Images | Wikipedia and K8 (Unsplash)

In Xataka | We knew that Mercadona was making gold from its suppliers. Now we know the million-dollar toll that this entails.

All the Big Tech companies are betting the money they have, and the money they don't, on the future of AI. All but one: Apple

650 billion dollars. No small thing. That is the total amount that Google, Amazon, Meta and Microsoft are going to invest in data centers for AI. The figure is astonishing, similar to the current GDP of countries like Argentina or Israel. But the curious thing is not only that: there is one Big Tech completely sitting out this fever to spend on AI as if there were no tomorrow.

Apple against the current. The company led by Tim Cook is the only one in the group of large technology companies whose capex (planned capital expenditure) shrank last quarter. Based on FactSet data compiled by Sherwood, Apple's forecast for the quarter was not to spend more but, mind you, to spend (quite a bit) less.

The numbers don't lie. According to the figures provided by these companies, Amazon expects its capex to reach up to $200 billion in 2026. Google wants to go from $175 billion to $185 billion. Meta estimates the spend will be between $115 billion and $135 billion. And although Microsoft did not give a specific figure, it will surely exceed the $114 billion estimated by Wall Street. And Apple? Apple will not spend more, but 19% less according to its latest estimates: about $12.7 billion.

- Amazon: +42% YoY (vs. the previous year)
- Microsoft: +89% YoY
- Google: +95% YoY
- Meta: +48% YoY
- Apple: -19% YoY

Cupertino passes on AI. While its competitors spent record sums last quarter (which ended December 31) on equipment and property linked to AI and data centers, Apple keeps not investing in the sector. It makes one thing clear: the company seems to have definitively decided that this is not its war.

Siri+Gemini is the best proof. Confirmation of that "surrender" came with the recent announcement that Gemini will be the AI on which the new version of Siri is based.
Apple's new AI assistant is expected to hit the market this spring with at least some initial features, but the fact that it depends entirely on Google's AI model makes it clear that Apple here prefers to delegate rather than invest in a foundation model of its own.

AI will be a commodity. Instead of joining this costly war of language models, Apple is convinced that AI will end up being a commodity, a basic standard technology like the PC, the mobile phone or the laptop today. Model prices plummet as the capability of those models grows, and benchmarks make it clear that no model stays ahead of the rest for long.

Apple as the gateway to AI. As usual, what Apple will do is take advantage of holding the "gateway to AI". With 2.4 billion devices worldwide, it controls the most valuable distribution channel on the planet. It can afford not to build "the engine" and instead act as the avenue that brings AI to the masses. Agreements like the one it has closed with Google are just the beginning.

Being late doesn't matter. It is in the company's DNA. It did not want to fight the search engine battle either, and it didn't matter: it struck a deal with Google, which has paid it billions of dollars for years to be the default search engine on iPhones, iPads and Macs. Apple prefers that others pave the way and absorb the costs of early learning; then it usually arrives with superior integration and a refined experience (iPod, iPhone), or directly with deals like the one it closed in search.

AI will be invisible and ubiquitous. Apple's goal does not seem to be offering its own chatbot on the web, but making AI invisible and ubiquitous. It does not matter which model runs behind; what matters is that the AI works transparently for the user and, of course, seamlessly integrated into Apple's services and applications.

Privacy as a flag.
And of course, with that vaunted commitment to privacy Apple always boasts of. Its Private Cloud Compute is the best proof of it: since it does not depend on advertising (hello Google, hello OpenAI), it can offer advanced features without collecting massive amounts of user data.

But there is a risk. Still, the strategy carries a critical risk: if AI models do not become a commodity and instead give rise to technological monopolies, Apple could be permanently at the mercy of its suppliers. If competitive advantages end up consolidated in the model layer (the one controlled by OpenAI, Anthropic and Google) rather than in the integration layer (which is Apple’s), that dependence on third parties will be a dangerous strategic weakness.

Room for maneuver. Apple has annual profits close to 100 billion dollars, which gives it an enviable financial position from which to wait for this “hype” cycle to cool down. There is clearly an AI bubble, and that bubble will probably end up bursting and leaving many victims. If it does, Apple will undoubtedly be one of the players with room to maneuver to survive.

Image | Xataka with Freepik

In Xataka | China does not have a spending problem with AI. What it has is a huge income gap compared to its main rival

OpenClaw is the total AI agent that challenged Big Tech. Big Tech’s response: buy it, of course

Peter Steinberger was a great unknown to the vast majority of the planet until less than a month ago. His project, initially called Clawdbot (later Moltbot and finally OpenClaw), became the new sensation of the internet and the AI world. Its growth has been so spectacular that the majors of this segment set their eyes on it and, inevitably, began to fight to sign its creator and acquire his project. We already have a winner of that bid: OpenAI.

What is OpenClaw. OpenClaw is what we could define as “the total AI agent”: a system that uses one or more AI models, such as those from OpenAI, Anthropic or Google, to do things for you. These are some differences from using those models in the “traditional” way:

- You can chat with your AI agent through messaging apps like Telegram or WhatsApp, as if it were just another contact.
- OpenClaw takes full control of the machine you install it on, whether an old PC, a Raspberry Pi or a VPS, for example. It has permission to do whatever it wants inside that machine, which also entails risks.
- The capacity of current models, such as Opus 4.5, makes the agent remarkably autonomous and proactive: it can, for example, suggest things to you or make decisions based on the conversations you have with it.

OpenAI buys OpenClaw. Last week Steinberger commented in an interview with Lex Fridman that OpenAI and Meta had made offers to sign him and acquire his project. Those intentions crystallized on Saturday, when the creator of OpenClaw announced that he had signed with OpenAI and that the OpenClaw project “will become managed by a foundation and will remain open and independent.” It was a more than reasonable exit for Steinberger, who has probably received a significant sum of money and prestige, but it leads us to the eternal question: can you compete with the big companies? Short answer: probably not.
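The behavior described above, a model deciding what to run on the machine it lives on, can be sketched in a few lines. This is an illustrative toy, not OpenClaw’s actual code: `model_reply` is a hypothetical stand-in for a real LLM call, and piping model output straight into a shell is precisely the “power” (and the risk) the article mentions.

```python
import subprocess

def model_reply(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call (OpenAI, Anthropic...).
    It maps the user's request to a shell command to execute."""
    if "disk" in prompt.lower():
        return "df -h"
    return "echo 'no action'"

def agent_step(user_message: str) -> str:
    """One turn of a chat-driven agent: ask the model for a command,
    run it on the host machine, and return the output as the reply."""
    command = model_reply(user_message)
    result = subprocess.run(command, shell=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

# A message arriving via Telegram/WhatsApp would be fed to agent_step,
# and its return value sent back as the agent's chat reply.
```

A real system would add what this sketch deliberately omits: sandboxing, an allow-list of commands, and user confirmation before anything destructive runs, which is exactly where the risks discussed above come from.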
Large companies have always been hampered by their own size when it comes to reacting quickly to new trends, and even the largest AI companies suffer from this same problem. OpenClaw was doing something none of them had dared to do (partly because this type of agent has too much “power”), but with these projects, and with the startups beginning to emerge, the same thing always happens: either the big companies copy the idea and end up burying the original, or they buy the startup that threatened to compete with them. For many startups, in fact, the “exit” strategy is precisely to be bought by a large company.

A creator who didn’t want to be CEO. Steinberger explained in his post how his project opened up “an endless string of possibilities” for him, and confessed that “yes, I could really see that OpenClaw could have become a giant company. But no, I’m not excited about that. I’m a creator at heart.” Steinberger has already created a company and dedicated 13 years of his life to it: “what I want is to change the world, not create a big company, and partnering with OpenAI is the fastest way to bring this to the entire world.”

A one-person unicorn? Soon after ChatGPT appeared, people began to talk about the “Solo Unicorn” phenomenon: a startup created by a single person that, thanks to AI, would be valued at more than a billion dollars. We do not know what price OpenAI has paid for this signing, but it probably does not reach that figure. What does seem evident is that OpenClaw was exactly the type of project and idea that could have become that “Solo Unicorn.”

The era of custom AI agents. Sam Altman, CEO of OpenAI, confirmed the news on X.
There he indicated that the creator of OpenClaw had joined OpenAI “to lead the next generation of personal agents”, and highlighted that “we expect this (personalized AI agents) to quickly become an integral part of our product offerings.” In addition, he assured that OpenClaw will remain open source, probably one of the essential conditions Steinberger set before joining the ranks of OpenAI.

And now what. That the project remains open source and independent is great news, and in theory it will allow OpenClaw to keep working as before, while having OpenAI’s resources can undoubtedly make it grow exceptionally. It remains to be seen whether that ends up having a negative impact in some way, but what also seems clear is that these “total AI agents” could soon become an integral part of other AI companies’ offerings as well.

Welcome to the era of total AI agents. We had already seen part of what OpenClaw does with projects like Computer Use from Anthropic, Project Jarvis/Mariner from DeepMind or Operator from OpenAI itself. Those systems let AI do things for us in the browser, but OpenClaw does things for us with all the applications on the machine we install it on (the email client, the command console, etc.). We are facing an interesting stage for this type of system.

In Xataka | OpenClaw is one of the most fascinating and “dangerous” AIs of the moment. A Malaga company has come to the rescue

Big Tech is paying up to $600,000 to influencers to promote their AI. Now the race is about perception

Big technology companies are deploying their heavy artillery to attract users to their artificial intelligence services. As CNBC reports, Microsoft and Google have found their new battlefield in influencers, with contracts reaching six-figure sums.

The dimension of the phenomenon. According to data from Sensor Tower, generative AI platforms spent more than $1 billion on digital advertising in the United States during 2025, an increase of 126% over the previous year. Large companies promoting their products through influencers is nothing new, and it is also a very profitable business for them: by investing a small fraction of their budget they can get an avalanche of new users. According to CNBC, in order to attract new users to their AI services, Microsoft, Google, Anthropic and Meta are hiring content creators to promote their tools on social networks.

Figures. Microsoft and Google are paying between $400,000 and $600,000 to content creators for multi-month collaborations, according to sources close to the outlet. These contracts are not limited to specific posts: according to the outlet, they seek to ensure that influencers integrate AI tools into their usual content, tutorials and workflows. “We’re seeing a massive increase in creator spending from these AI brands. We’re getting a lot more interest from AI brands every month,” says AJ Eckstein, founder of Creator Match, an agency that connects brands with creators.

How these agreements work. Collaborations range from LinkedIn posts explaining how to use Claude Code to Instagram videos showing features of Microsoft Copilot or Perplexity’s Comet assistant. Megan Lieu, an AI and technology content creator with nearly 400,000 followers, told CNBC that her sponsored deals typically range from $5,000 to $30,000 depending on the campaign.
Her most important collaboration to date has been with Anthropic to promote Claude products, although she did not specify the exact figure to the outlet. Some influencers can charge up to $100,000 per post, according to Eckstein.

The other side of the coin. Despite the astronomical numbers, not all content creators are willing to jump on the AI bandwagon. Jack Lepiarz, known as Jack the Whipper and with more than 7 million followers across YouTube, TikTok and Instagram, told the outlet that he systematically rejects any deal related to artificial intelligence. “I cannot with a clear conscience support something that is going to make it difficult for normal people to earn a living,” he declared. Lepiarz previously turned down a $20,000 contract to promote AI imaging tools and says even $100,000 or $500,000 wouldn’t change his mind.

Perception with Copilot. For Microsoft, these influencer campaigns can be especially key: despite its large user base in Microsoft 365 services, only 3.3% pay for Copilot, according to Windows Central. The company needs its AI assistant, integrated into Windows, Microsoft 365 and Edge, to be perceived as a natural tool in daily work, and for now that is proving especially difficult to achieve.

Advertising time. Big Tech hiring influencers comes precisely at a time when companies are investing more than ever in advertising their AI tools. A few days ago we covered the case of Anthropic, which spent a million on ads during the Super Bowl. Separately, Google and Microsoft increased their digital advertising spending on AI products by approximately 495% last month compared to the same month a year earlier, according to Sensor Tower. The outlet also says that OpenAI multiplied its advertising investment tenfold in 2025. After years spent making its tools known, it is now time to shape our perception of them.
Cover image | aerps and Hillary Black In Xataka | The person who is earning the most money on Twitch by broadcasting 24 hours a day is not a person: it is an AI
