TSMC is not going to use its High-NA machines at the moment and has a compelling reason not to do so

On April 23, TSMC made official a strategic decision very important: has postponed the adoption of ASML’s extreme ultraviolet (EUV) and high aperture lithography machines until 2029. These are the equipment of manufacturing of more advanced integrated circuits that this company from the Netherlands currently has in its portfolio, and TSMC’s announcement caused ipso facto a drop of 3.3% of the value of its shares. It is not in vain that this Taiwanese chip producer is ASML’s largest client. In 2025 23.9% of total sales of this Dutch company came from TSMC. The main reason why this last company has decided not to use UVE High-NA machines of ASML in the short term is strictly economic. Each of them has a price of around 350 million euros, and, in addition, a single cutting-edge semiconductor plant requires the installation of several dozen of this equipment. TSMC considers that they are currently too expensive to make the manufacturing of advanced chips profitable. And, interestingly, Intel, Samsung and SK Hynix they are already adopting High-NA technology. This decision by TSMC brings great technical challenges The step taken by TSMC has not been improvised, as might be expected. In fact, over the past two years several managers at this company have publicly expressed doubts about the short-term adoption of ASML’s High-NA equipment. In January 2024 CC Wei, the current president and CEO of TSMC, surprised us with this statement: “We are studying it carefully, evaluating the maturity of the tool and examining its costs. We always make the right decision at the right time in order to offer the best service to our clients,” Wei assured. during a meeting. A few weeks earlier Szeho Ng, an analyst at China Renaissance, predicted that TSMC would not use ASML’s high-aperture UVE equipment until it introduced its 1nm integration technology. “We always make the right decision at the right time with the purpose of offering the best service to our clients” Last week it was Kevin Zhang, TSMC’s deputy chief operating officer, who clarified something very important: “I am amazed by our R&D team. They continue to find ways to drive technological development without using ASML’s High-NA UVE equipment. Someday we may have to use them, but right now we can continue to reap the benefits of current EUV technology without moving to High-NA which, as we all know, is extremely expensive.” In 2029, TSMC intends to have the A12 and A13 integration technologies ready for large-scale production, which are nothing more than derivatives of its A14 photolithography. From a commercial point of view these will be the first 1.2 and 1.3 nm technologies of this company. They will use GAA transistors (Gate-All-Around) and NanoFlex Pro technology. This latest innovation will allow IC designers to use fast cells for the critical parts of the GPU that need speed, and dense or efficient cells for the rest, thus optimizing the chip area down to the last millimeter. What we still do not know is what technical solutions TSMC engineers are going to implement to make it possible to manufacture 1.2 and 1.3 nm integrated circuits using ASML’s UVE equipment. It’s just a guess, but it seems unlikely that they will resort to the multiple patterning because this procedure compromises the performance per wafer and the cost of the semiconductors. TSMC would lose competitiveness. One last note: the multiple patterning Broadly speaking, it consists of transferring the pattern to the wafer in several passes with the purpose of increasing the resolution of the lithographic process. Image | ASML More information | Innovation Origins In Xataka | Bill Gates has X-rayed Intel. And his diagnosis is overwhelmingly accurate.

There is a Russian bomb floating in the Mediterranean coming from Ukraine. And Europe trembles because it can explode at any moment

It is a fact that most of the world’s trade moves by sea. This means that every day thousands of ships cross key routes very close to European coasts. In this constant traffic, a single out-of-control incident is enough to put entire ecosystems in check and force several countries to react at the same time. The war in Ukraine has just ended activate one of them. A bomb adrift in the heart of Europe. The situation is the following: in the Mediterranean right now there is more than just a damaged ship, the Arctic Metagaz is a latent threat that mixes war, energy and environmental risk in a single point. We are talking about a loaded Russian tanker with gas, fuel and diesela ship hit by a drone attack from Ukraine that sails uncontrollably, with structural damage and a real risk of explosion. Not only that. It appears to have no crew, is leaking and catching fire, and is moving slowly between European waters and North Africa. What makes it especially disturbing is not only its condition, but its origin: It is one more piece of the war being fought in Eastern Europe that has ended up floating in the Mediterranean, moving the conflict directly to the doors of the entire continent. It’s not just the front anymore. The episode confirms something that was already intuited for some time: that the war between Russia and Ukraine is no longer confined to the Black Sea or the land front. Ukraine has expanded its radius of action by attacking Russian ships on much more distant routes, including those that are part of the called “ghost fleet”key to avoiding sanctions and financing the Kremlin’s war effort. These increasingly frequent attacks turn ships into de facto military targets, even if they are sailing through international waters or near European territories. The result is an extension of the conflict that blurs borders and places Europe in an uncomfortable position, because it is not a direct part of these attacks, but its potential scenario. Arctic Metagaz Ecological risk and implications. The immediate danger right now it’s pretty obvious: an explosion or massive spill in an area of ​​high ecological value could cause lasting damage in the Mediterranean, affecting protected ecosystems and coastal economies. But the problem goes beyond the environmental impact. These types of incidents also reveal to us the fragility of the maritime system in times of hybrid war, where poorly maintained, aging ships, with opaque structures and no safety guarantees, They circulate on key routes. The combination of sanctions, evasion and attacks turns these ships into risk vectors that can trigger crises at any moment. Europe and the threat. The European reactionwith Italy and France along with several EU members warning of the imminent risk, reflects a growing concern: countries have asked a coordinated response facing a problem that is not only specific, but structural. The difficulty in intervening (whether due to weather conditions, the location of the vessel or legal issues) also represents a capacity and governance vacuum in nearby waters. While Russia he ignores of incident management and points to coastal states as responsibleEurope faces a rather complex dilemma: managing the consequences of a war in which it neither controls the origin nor the evolution. Symbol of a new phase. If you also want, the derived from the Arctic Metagaz summarizes like few elements the evolution of the current conflict: a war that no longer only dynamits infrastructure on land, but is capable of turning the sea into a space constant riskwhere each asset can become a threat. It is not just, therefore, an accident or an isolated episode, but the proof (one more) that the conflict has acquired an unpredictable dimensionwhere an action in Ukraine can end up generating a crisis thousands of kilometers away. And that is precisely what it has of the nerves to Europe: not knowing when or where the next impact may materialize. Image | war-sanctions.gur.gov.ua In Xataka | While we all look at Iran, in Ukraine they continue doing their thing: robot against robot battles where humans only watch In Xataka | Ukraine has become the world’s leading specialist against Iranian drones. And he won’t share his antidote

They fill you up in the moment, but leave you emptier later

Every time we are more alone and so? it’s a problembut don’t worry, AI has arrived to save us. Mark Zuckerberg believes AI friends can fill that gap. In South Korea They are sending AI robots to older people to keep them company and in New York they are also testing it. New studies suggest that it is a bad idea. The study. Was conducted by the University of British Columbia on a sample of 300 first-year students. The study leader tells 404media which is a very vulnerable time because often students are away from their family and don’t know anyone. The students were divided into three groups: the first would chat with an AI chatbot, the second would chat with another unknown student and the third, the control group, would have to write a diary. The rules were that they had to write at least one message a day and complete several daily surveys, including the UCLA Loneliness Scale. The chatbot was based on ChatGPT-4o, the model known for being more empathetic and close and whose withdrawal generated criticism precisely among those who seek emotional connections with an AI. The results. Participants who talked to another peer showed lower levels of loneliness, while the other two groups (AI chatbot and diary) showed no change. They did see a decrease in negative mood when chatting with the AI, suggesting that it offers momentary relief, but does not have a lasting effect. 33% of students in the first group continued talking to their partner after the experiment, while only 14% continued talking to the chatbot. There is more. The University of British Columbia conducted another study with a sample of 2,000 adults over a full year. It was seen that people who feel lonelier are more likely to increase the use of chatbots to try to fill that void. However, in the long run the effect of emotional isolation increases even more. According to one of the authors, it “suggests a negative feedback loop” and compares them to “social junk food,” meaning it fills but does not nourish. Connect with an AI. When we saw Her in 2013 we did not imagine that a few years later the story was going to come true. There are people falling in love with AIssome even they cheat on their human partners with one It is an increasingly common trend and AI companies know it. There are apps that offer AI companions like Replika or Character.ai, but AI is also coming mainstream. We have the example with the controversial erotic mode that OpenAI is preparing for ChatGPT. They are not always romantic relationships, there are also those maintain a friendship and who goes to the AI ​​to tell it their problems, as if he were a psychologist. The experts have already warned that machines cannot replace a real connection, but the machinery advances unstoppably and AI is already redefining relationships. In Xataka | People Blaming ChatGPT for Causing Delusions and Suicides: What’s Really Happening with AI and Mental Health Image | Talha Uğuz, Pexels

The war between Anthropic and the Pentagon points to something terrifying: a new “Oppenheimer Moment”

Anthropic has refused to bow to pressure from the Pentagon. Its co-founder and CEO, Dario Amodei, has just published a statement in which they make it clear that they are not willing to break their ethical principles. No massive espionage with AI, no development of lethal autonomous weapons with its models. And that reminds us of a terrible case: the one with the atomic bomb. From hero to villain. J. Robert Oppenheimer went from being the “father of the atomic bomb” and a national hero to become in an outcast. His sin was not betrayal, but his moral clarity. After witness the horror of Hiroshima and Nagasaki, Oppenheimer desperately tried to stop the atomic escalation and the development of the hydrogen bomb. Either you are with us, or against us. The United States, which had praised him in the past, took advantage of his former political affiliations and stripped him of all his privileges and influence. This demonstrated how the US government simply decided that scientific knowledge was state property and that any researcher who tried to propose ethical limits to their own projects would be treated as an enemy of the country. History is threatening to repeat itself these days. From Oppenheimer to Anthropic. He is doing it with a protagonist that is still there—the US Government—and another that is changing: the one who now defends the ethics of a scientific-technological project is not Oppenheimer, but Dario Amodei, CEO of Anthropic. Claude is increasingly vital in the US Government. Your company is between a rock and a hard place these days. Anthropic managed to make its model Claude become the pretty girl of the US Government. The ability of this AI has proven to be so remarkable that it was apparently used to plan the arrest of the former president of VenezuelaNicolás Maduro. red lines. But so that the Pentagon could use Claude, Anthropic imposed certain red lines. No use for mass surveillance of US citizens, and no use for the development of lethal autonomous weapons. And the Pentagon has ended up not liking those red lines, so they want to eliminate them and use Claude as they please as long as, they say, the Constitution and American laws are respected. The Pentagon wants AI without restrictions. That has ended up causing an enormously tense situation these days. The Pentagon threatened to punish Anthropic if it did not give in to its demands, and those threats from the Department of Defense have not been subtle at all. In fact, they have suggested that they could label Anthropic as a company that is “a supply chain risk,” a black label typically reserved for companies in rival countries like China or Russia. Contradiction. Dario Amodei himself explained in an entry on the company’s official blog that those two threats were self-exclusive: “These last two threats are inherently contradictory: one labels us as a security risk; the other labels Claude as essential to national security.” Can AI be nationalized? It’s a disturbing irony: the same government that considers Claude an essential tool for national security is willing to label his creators a public threat if they don’t hand over the keys to the kingdom and their AI. What the Department of Defense and the Pentagon want is to basically “nationalize” the AI ​​technology developed by Anthropic and appropriate it as they already did with the technology that gave rise to the atomic bomb. We know how that ended. Anthropic refuses to give in. The danger is enormous in both sections: mass surveillance, rather than defending democracy, can dynamite it from within, and the NSA scandal is a good example. But even more worrying is the Pentagon’s intention to use this AI to develop lethal autonomous weapons. Amodei insisted on this point, indicating that “The foundational models of AI They’re just not reliable enough. to power fully autonomous weapons. “We will not knowingly provide a product that puts American warfighters and civilians at risk.” Amodei even offers the Department of War/Defense help in the “transition to another provider” of AI models, but at the moment it is not clear which path the US government will take. Oppenheimer Moment. If the Pentagon finally execute his threat and ban Anthropic, the message for the industry will be chilling. In the age of AI there are no conscientious objectors: if a company develops a technological and strategic advantage at a military level, that company is at the mercy of the State. It is a new and terrifying “Oppenheimer Moment” that conditions the future not only of Anthropic, but of the development of AI models itself. In Xataka | “The world is in danger”: Anthropic’s security manager leaves the company to write poetry

An AI publishes 11,000 podcasts a day by copying local journalists. And at the moment there is no way to stop the avalanche

An automated podcast network publishes more episodes in 24 hours than many broadcasters do in a year, using AI to convert news articles to audio in minutes. A specific case, that of the channel ‘The Daily News Now!’, helps us to consider how far the scraping of content in the era of generative AI. To loot. The case was put on the table indicator: On January 31, at 2:57 in the afternoon, the newspaper ‘The Chronicle’ (a completely marginal publication: despite being 120 years old, it is published by Duke University, in Durham, and is run and produced entirely by students) published an article about Gemma Tutton, a student and pole vaulter who had won a university competition. Seventeen minutes later, a podcast called ‘Durham News Today’ uploaded an episode titled ‘Gemma Tutton’s Triumphant Return to Pole Vault’ to Spotify. The podcast, of course, had no connection with the newspaper. But it reproduced almost all the data from the original article in the same order, including practically identical phrases. And it is not an isolated case: ‘Durham News Today’ is one of at least 433 programs that make up ‘The Daily News Now!’ podcast network created by Corey Cambridge. As of January 23, ‘DNN’ has published more than 350,000 episodes (approximately 11,000 per day). How they do it. Obviously, with AI: a system of scraping (software automation that extracts large volumes of content) monitors media websites, excises text from published articles, processes it using natural language synthesis tools, converts it into audio and distributes it on platforms such as Spotify. All in a matter of minutes. And they don’t bother to dissemble: according to Indicator, they reproduce the structure, data and writing of pieces published by outlets such as local Fox and NBC affiliates, ‘TechCrunch’, ‘Toronto Star’, ‘The Verge’ or the radio station ‘WRAL’. The tools. To understand why an operation of this type is technically possible today, we must take a look at the ecosystem of tools that has been democratizing synthetic audio production for two years. In September 2024, Google activated the feature globally Audio Overview of NotebookLM. The tool converts any document uploaded by the user into an audio summary. The impact was immediate: NotebookLM went from 652,000 monthly visits in August of that year to 10.5 million in September, an increase of 371% in thirty days. In the three months following the global launch, users accumulated audio with a total duration greater than 350 years of continuous reproduction. NotebookLM normalized the idea of ​​the synthetic podcast, and it was all downhill from there. ElevenLabsspecialized in speech synthesis and valued at more than a billion dollars, launched its GenFM function in December 2024, which allows you to generate complete episodes from text. Wondercraftfunded in part by ElevenLabs, introduced support for editing podcasts generated with NotebookLM. Podcastle, aimed at podcast creators, incorporated speech generation with text to complete or replace fragments of speech. The secret: the price. In an analysis from a similar network (Inception Point AIwhich generates around 3,000 episodes per week with more than fifty AI announcers) producing an episode costs approximately one dollar, and with just 20 listeners the episode is profitable thanks to programmatic advertising. The model does not seek loyal audiences, but search engine positioning: by publishing hyper-specific episodes on cities or niche topics minutes after local media launch their articles, these networks anticipate humans’ capacity for informative immediacy. In other words: ‘The Daily News Now!’ appears in the top Spotify results for local news searches in dozens of American cities. It directly competes (and in many cases surpasses) the media from which it steals content. Legal issues. Cambridge defends itself by saying that its network only accesses “publicly available information” and merely summarizes it. But Indicator found almost thirty episodes of ‘Durham News Today’ that reproduced the structure, order and specific sentences of articles from ‘The Duke Chronicle’: it is not a specific pattern. And Cambridge may still be legally protected, but the problem is more about information ethics than legal details. In any case, in May 2025, the United States Copyright Office came to the conclusion that “publicly accessible” material is not necessarily free to use. There are legal precedents in that direction: in November 2025, a federal judge from New York did not reject the lawsuit by fourteen major publishers (including Forbes, The Atlantic and the Los Angeles Times) against the AI ​​company Cohere, considering that their summaries could constitute direct infringement if they reproduced “structure, sequencing, tone and expressive choices” of the original articles. On the contrary, in April of the same year, the case NYT vs. Microsoft dismissed claims related to the Copilot-generated summaries on the grounds that they were not “substantially similar” to the source articles. Meanwhile, and still without trialthere is the case of the New York Times against OpenAI and Microsoft, accused of using journalistic content to train their models Very clever. There is another detail: we are not talking about the ‘New York Times’, but rather ‘DNN’ concentrates its production on local niche news (university athletics, student councils, cats trapped in trees), first because these contents generate specific searches with little competition on Spotify. And second, because legally it is safer. They point to more fragile journalism models. Meanwhile, distributors like Spotify are developing tools to detect artificial music (removed more than 75 million tracks), but the next step is to make big brands aware that they do not benefit from the exploitation of newsrooms that cannot defend themselves. In Xataka | AI is already a battlefield: Anthropic has just accused DeepSeek and other Chinese companies of “distilling” Claude

Xiaomi already has its own AI model for robots. At the moment, he’s great at taking apart LEGOs and folding towels.

It has been a long, long time since Xiaomi stopped being a mobile company. Today the company’s tentacles reach all types of sectors, from mobile and household appliances until cars, chip design and, from now on, robotics. And the Chinese company has just presented its first vision, language and action model for robotics. Its name: Xiaomi-Robotics-0. What is this about?. Xiaomi-Robotics-0 an open-source model whose code can be found in GitHub and HugginFace. As the company explains, this model has been optimized to offer “high performance, speed and smoothness in real-time executions.” We should not think of this model as an AI capable of making a robot run and jump like a human, but rather one capable of making a “simple” robot understand its surroundings and know how to make the optimal decision without, for example, destroying whatever it has in its hands. About the robots. When we talk about AI applied to robotics we are not just talking about a robot being able to move. The device must know and understand that it should not apply the same force when holding a brick as it does when holding a cat, for example. In that sense, there has to be an understanding of the visual, an understanding of what is being seen and an appropriate execution of actions: this is a brick > it is a heavy object > I have to apply more force to hold it and move it from one side to the other. Xiaomi-Robotics-0 results in the benchmarks | Image: Xiaomi The benchmarks. Xiaomi has achieved, as detailed on the project website, very good results in the benchmarks I RELEASE (measures knowledge transfer), SimplerEnv (measures performance in real simulations) and CALVIN (measures performance in tasks conditioned by language). According to the company, Xiaomi-Robotics-0 “achieves high success rates and robust results in two challenging two-handed tasks: disassembling LEGOs and folding towels.” The fun of training. Every AI model draws from a training dataset. In the case of Xiaomi-Robotics-0, a 4.7 billion parameter model, the dataset consists of 200 million time steps of robot trajectories and more than 80 million samples of general vision-language data, including 338 hours of LEGO disassembly videos and 400 hours of towel folding videos. The results. The company claims in the paper that its model is capable of disassembling complex LEGOs of up to 20 pieces, adapting the grip in real time to avoid errors, using only one hand to place the towel correctly and folding it or, if you pick up two towels from the basket, take one of them, leave it in place and fold only one. This demonstrates an interesting capacity for adaptation and learning that, although it may seem trivial on paper, has its value if we think about industrial and even domestic robots. Beyond. What this model is demonstrating is being able to adapt to complex and unpredictable geometries, such as that of a towel thrown in a basket, and to understand the, let’s say, “soft physics.” On a towel it may seem like a small thing, but let’s think about manipulating human tissues in an intervention, for example. Same with LEGOs. It’s not just disassembling them, it’s understanding the position of the blocks, how they fit together, what force to apply and at what angle so as not to break them. Let’s think about a robot that removes debris. An industrial robot has historically been programmed with fixed coordinates, that is, moving something from point A to point B. A robot with AI like the one proposed by Xiaomi would be much more versatile. The first robot learns movements, the second robot learns tasks, and the difference is a world. If we think about a distant future in which there are domestic robots, a robot cleaning dust from a shelf will not be the same as knowing how to identify objects, decorations, etc., and understanding that it must move them to avoid throwing them away and cleaning them thoroughly. Cover image | Xiaomi In Xataka | A Chinese company boasts another limit in robotics: it ensures that its new humanoid robot runs like an elite athlete

OpenClaw is the most viral, fascinating and dangerous AI of the moment. For this last reason, it has joined forces with VirusTotal from Malaga

In 2025 we had a ‘DeepSeek moment’ and in 2026 we are having an ‘OpenClaw moment’. This AI agent is super powerful, but also super insecure. There is, however, good news, because the Malaga company VirusTotal has partnered with the OpenClaw project to try to mitigate one of the most important cybersecurity risks of this AI agent: its skills. what has happened. OpenClaw (formerly Moltbot, and before Clawdbot) has announced that it has begun a collaboration with the Malaga cybersecurity company VirusTotal, owned by Google. The agreement will see VirusTotal be in charge of “scanning” and analyzing the so-called “skills”, which work like OpenClaw plugins and add all kinds of functions. They do it, of course, but many take the opportunity to introduce malicious instructions that allow them to steal data and remotely operate other people’s AI agents. More security for disturbing AI. Peter Steinberger, creator of the project, has joined Jamieson O’Reilly, cybersecurity expert and founder of the company Dvulnand Bernardo Quintero, founder of VirusTotal, to offer that “additional layer of security for the OpenClaw community.” In it official announcement explain that “all the skills published in ClawdHub (the project’s official skills “store”) are now scanned through Virus Total’s Threat Intelligence system, including its new capability Code Insight (code inspection)”. Bernardo Quintero indicated on Twitter how the effort has already allowed 1,700 skillls to be identified as malicious. If the skill is malicious, it is blocked. This analysis carried out with the VirusTotal tools allows us to identify skills as malicious and block them immediately so that they cannot be downloaded. Not only that: those skills that have been classified as benign are analyzed again every day to detect scenarios in which for some reason they could end up becoming malicious. Still, be careful. Those responsible for OpenClaw warn: the VirusTotal scan helps a lot, but it is not a total guarantee that any skill can perform malicious actions on the machine on which we have our AI agent installed. The attacks of prompt injection Sophisticated skills can manage to cross that barrier, but of course this collaboration means that OpenClaw users can be much calmer regarding the skills available in the ClawdHub repository. OpenClaw wants to be much more secure. This first effort joins OpenClaw’s ambition to have a complete cybersecurity model which includes things like a public roadmap for your new developments in this area, a formal communication process, and details about full audits of your code. Plugging a problem that could kill OpenClaw. The OpenClaw project soon went viral due to its eye-catching options, but shortly after doing so a security audit initial 2,851 skills detected 341 malicious skills. Companies like BitDefender also joined these efforts to avoid problems with tools like AI Skills Checker to check whether a skill was dangerous or not. These malicious skills were, for example, capable of executing shell commands on the victim machine, which gave the attacker complete control of those resources. Attacking the machine is confusing it with natural language. Normally cybersecurity attacks are complex, but the problem with AI agents is that they work with natural language. This implies that to infiltrate these systems you do not have to use code, but simply “convince” and “trick” the AI ​​with natural language. That is where prompt injection attacks come in, which consist of giving instructions to those AI agents that can confuse them to obtain something that theoretically they should not allow them to obtain. Personal data, API keys of the models we use at OpenClaw, email accounts and passwords for all types of services… the possibilities are endless, and OpenClaw, which has access to all of this to operate autonomously, can end up being “tricked” into transferring said data. Beware of OpenClaw. These problems now seem a little less feasible thanks to the collaboration with VirusTotal, but those who are trying OpenClaw on their machines or any other platform should be very alert from the beginning. There are guides that help you install it with some barriers important security issues, and the project itself has a command (‘openclaw security audit –deep –fix’ to audit the most important problems and address them. In Xataka | OpenAI has a problem: Anthropic is succeeding right where the most money is at stake

There is a Chinese startup creating the most amazing robots of the moment. It’s called X Square

The only embodied AI (bodied artificial intelligence) company backed by the three Chinese technology giants: ByteDance, Meituan and Alibaba. Just over two years of life and financing rounds in which they have managed to overcome the 400 million dollars. These are some of the cover letters of X Square Robot, one of the most promising companies in the field of robotics. where does it come from. XSquare It is a Chinese startup which was born in 2023 at the hands of Wang Qian, an engineer and doctor from the University of Southern California who, in recent years, has maintained a discreet profile in the industry. The company was born not only as a company aimed at creating humanoid robots: they are also behind the development of the language models necessary to lead in robotics. The roadmap. The startup, despite its youth, has made the most of its two years of life. December 2023, full financing and start of operations. March 2024, efforts begin to develop a general large-scale model for embodied AIthe brain that would move its robots. May 2025, commercialization of Quanta X1, a bimanual wheeled robot equipped with its WALL-A model. Specially designed for logistics and commercial tasks. July 2025, first to show purposeful AI model general capable of directly controlling a highly dexterous robotic hand. Unlike traditional approaches—based on rules, fixed trajectories or action-specific training—the system uses a single model that integrates perception, planning and control, allowing grip and movement to be adapted in real time to changes in the environment. August 2025, Quanta X2 arrives, its first humanoid robot, also with a wheel base. The product. Quanta X2 is the latest solution from X Quare, a wheeled humanoid robot that integrates the company’s own AI model. This model allows the robot to have a vision system, autonomous motion control, real-time task planning, etc. We highly recommend watching the demo video in which X Square shows it in operation, because it is spectacular. Why is it important. X Square does not sell ordinary humanoid robots, it sells cognitive capacity. The norm in robotics companies is to design the hardware and adapt it to existing software. X Square designs its own models focused on physical AI. This is something fundamental for his native country, China. The country wants to accelerate the automobile industry in 2030 with 100% automated factories. The aid policy is especially favorable for local companies developing robotics solutions. China has created centers responsible for training robots to imitate human behavior. X Square software is key The backup. X Square is backed by giants like Alibaba and Bytedancethe first group having announced an internal team dedicated to robotics using Qwen, its AI models division, as a base. Despite Alibaba’s muscle when it comes to creating its own language models, the investment of more than $140 million in X Square Robot makes it clear that it is much more than a typical startup. Image | XSquare In Xataka | Robotics has just broken another scale barrier: there are already autonomous robots smaller than a grain of salt

At the moment it has an open field and four streets of a PAU

The official website says that “you no longer have to imagine it. Now you can live it.” But the truth is that, as the matter stands, it is difficult to imagine. Let’s hope we can live it. Because when there are nine months left, Madringthe capital’s Formula 1 street circuit looks like anything but a circuit. At these moments, it becomes difficult to think that through those streets and those half-assembly curves you can search Fernando Alonso his 33. If everything goes as planned, the September 11 The cars have to start rolling south of the city. Among the streets of Valdebebas, one of the Madrid PAUsand the bowels of IFEMA, which competes with the Fira de Barcelona for being the largest fairground and convention center in Spain. The layout is a good reflection of everything that Formula 1 is rewarding: urban layouts that allow attracting large investments from cities to export their image to the world even if, as in the case of Madrid, all the attractions around the circuit are residential buildings, a fairground and a half-built City of Justice for more than 20 years. Now, with 243 days left until Madrid returns to the Formula 1 calendar 45 years later, a doubt is beginning to float: whether Madrid will return to the Formula 1 calendar. Some works in diapers The FIA ​​said that everything is going well, that there is no problem. That’s what they collected in I amMotora portal specialized in motorsport competitions, a little less than a month ago. “There are no delays or concerns within the FIA,” they then stated from the media, which also claimed that a commission from the International Automobile Federation had been supervising the status of the works in the streets of Valdebebas. From IFEMA they also defend that they are advancing at the expected pace. “The works are going within the established deadline. The paving has begun and part of it is already finished, although it is being done little by little due to the rain,” they point out to Autobild. And they make it clear that the circuit still remains to be completed: “the last layer of asphalt is expected to be completed during the summer.” That IFEMA has come out to speak is no coincidence. The information points to a closing of ranks after the viability of the project was questioned. At least for this year. The Italian media Rmc Motori claimed last November that Liberty Media, owner of the rights to the sport, was considering removing 2026 calendar to Madrid in favor of Imola, given the progress of the works. The legendary Italian circuitwill not be part (at least for the moment) next year of the F1 calendar. What is certain is that the works seem to be in an embryonic phase. Those who are walking around the circuit these days are finding the streets of Valdebebas without any type of modification. Ready for you the cars pass at full speed but not single-seaters. Click on the image to go to the original tweet With few exceptionsit is difficult to intuit the circuit along the 22 curves that make it up. The Monumental, a banked curve with a 24% inclination that has become one of the great attractions of the circuit, it’s a muddy mess right now. Yes, progress has been made on the route but there is no sign of progress in the surrounding services and the first asphalt is conspicuous by its absence. The times are also much tighter than we might think. In August the circuit must be ready so that Eurocup-3, a single-seater category inferior to Formula 1, can compete in one of its grand prizes. If it arrives, the intention is to make it the first big test before Fernando Alonso, Carlos Sainz or Max Verstappen set foot on the soil of Valdebebas. Click on the image to go to the original tweet The circuit, in addition, has to fight with the opposition of the neighbors. Pave the way for cars to pass is causing profound changes in its streetsconstant works and the anticipation that the noise suffered during that weekend will be much higher than the averages that have paralyzed the concerts at the Santiago Bernabéu. Besides, environmental associations They defend that the project threatens the conservation of wetlands and “non-transplantable” trees in the area. Nor is Madrid the first city where the viability of a Formula 1 Grand Prix is ​​doubted a few months before its celebration. In South Korea, Yeongam circuit was not reviewed by the FIA up to 10 days before the traffic light went out on the finish line in a clear example of “out of sight, out of mind” heart. That same weekend work was being done on the track and in some areas the asphalt was not well established. In Las Vegas, Formula 1 has been fighting with a recurring problem for three years now: the sewers become loose with the passage of cars. And in Hanoi, 600 million euros were spent on a circuit so that five years later a total of zero cars raced before its abandonment. Photos | Ifema In Xataka | Madrid says that F1 will not be paid for with public money. Valencia promised the same and it cost them 300 million euros

The DGT is not going to fine for the V-16 beacons at the moment, and therein lies the key

Since last January 1, anyone who is stranded on the road due to a breakdown has to place the V-16 beacon connected. And what happens if I don’t have it? Absolutely nothing. At least that is what the Government assures. Because, with the law in hand, the agents can fine us if they consider it appropriate. We also don’t know how long this “truce” will last. “It is not tax collection”. This is what Fernando Grande-Marlaska stated in the press conference in which he gave the results of the road accidents relating to 2025. The DGT has made public the accident data for last year but a good part of the press conference has revolved around the topic of the moment: the connected V-16 beacon that the DGT has been required to carry since last January 1. The agents, Grande-Marlaska assures, will not fine for a “reasonable” period of time, in words reported by The World. They do it because, they say, “our objective is not sanctioning or collecting, what moves us is the obligation to save lives.” “Reasonable”. It is the temporary measure that the Minister of the Interior has used to refer to the time that the agents have before fining. The word says nothing because, really, from January 1, 2026, Traffic can fine us for not having the corresponding signage elements. The fine is 80 euros and it does not take into account whether we carry the triangles with us because the only essential element in the car when signaling an accident is the connected V16 beacon, which must be approved by the DGT. And the triangles have been left in a kind of limbo so that the driver can do with them whatever he considers. Not now. The position of the DGT has changed over time. Since it was confirmed that the V-16 beacon would be the only signaling element of a road breakdown, the discourse has changed and its position has been relaxed. At first it was argued that the use of triangles could be grounds for a fine since an incident was not being correctly signalled. Now, Interior says that there will be a period in which fines will not be imposed for this. Later it was left up in the air whether the beacon+triangle combination was valid. Finally, it will be allowed put the triangles “at your own risk”. many doubts. In his speech, Grande-Marlaska pointed out that last year more than 100 people died on the road, “a significant number for getting off to put up the triangles”, in words reported by Motorpassion. In The World They point out that estimates point to 25 pedestrians dying while trying to put the triangles in what Grande-Marlaska describes as “bleeding.” However, as we have said in Xatakathe DGT has never offered clarifying data. Traffic has always classified these victims as people run over “after getting out of the vehicle” but without clarifying under what circumstances. They do not indicate whether they were hit when getting out of the car, putting the triangles on, changing a tire on the shoulder or waiting for help to arrive. According to their accounts, between 2018 and 2022 (a period that includes before and during the COVID-19 pandemic), an annual average of between 18 and 26 people died in accidents “after getting off the vehicle” on high-capacity roads. as reflected in the document itself which explains why the regulations and technical requirements of this connected V-16 beacon are changed. Taking the total number of deaths in this entire series (8,615 people, according to data from Statista), we are talking about just over 1% of deaths that fall into the category “after getting out of the vehicle.” No fines but no extensions. The result in the application of the measure has been paradoxical. From the Interior they say that the measure is “essential” to reduce the number of road accidents but omitting its use or not having the beacon will not be penalized despite there being no extension. And, at the same time, Traffic defends that it has not implemented an extension because it is something that has been known since 2023 and that we should have already purchased the device. According to Pere Navarrodirector of the DGT, “we considered delaying it” but that “would not have changed anything.” Also left to the driver’s discretion whether or not they want to put the triangles in despite the fact that they consider it a sufficient risk to promote a regulatory change. And they recognize that something has been done wrong with the communication of the new measure. Photo | DGT and Help Flash In Xataka | The V-16 beacon business: who is making money with the elimination of the DGT triangles

Log In

Forgot password?

Forgot password?

Enter your account data and we will send you a link to reset your password.

Your password reset link appears to be invalid or expired.

Log in

Privacy Policy

Add to Collection

No Collections

Here you'll find all collections you've created before.