China manufactures 90% of the world’s humanoid robots, and the reason is not its industrial policy: it is being able to cross the street

On Chinese New Year, 16 Unitree humanoid robots danced a folk dance before almost a billion viewers. The West reacted as it always does: some with panic, others with disdain, others with an undisguised admiration that tends to produce more clichés about China than real analysis. None of those reactions is entirely accurate, and that blindness has a cost.

The context. China manufactures about 90% of the humanoid robots sold in the world. In 2025, about 13,000 units were shipped, with Chinese companies (AgiBot, Unitree, UBTech…) dominating the ranking by volume, according to Omdia data collected by Bloomberg. Tesla, with all its brand reputation and industrial apparatus, internally deployed around 800 Optimus units that same year.

The figure. The Unitree G1 costs $13,500. The Tesla Optimus will exceed $20,000. That gap is the difference between being able to iterate ten times on the same budget or stopping at one.

Between the lines. The story circulating in the West has two versions, equally lazy. The first: all of this is the five-year plan, the hand of the State, industrial policy made robot. The second, reserved for the most condescending: it is because they copy. Neither explains what is really happening. China’s advantage in robotics does not come from the Communist Party. It comes from the Pearl River Delta and the Yangtze Delta: the two densest manufacturing ecosystems on the planet. Motors, actuators, sensors, custom PCBs… everything is available within walking distance. That is what Rui Xuan, an engineer who has worked at robotics startups in both China and Silicon Valley, describes: when Unitree wants to test a new joint design, it crosses the street and comes back with the right component. A team in San Francisco has to wait weeks for the same component to arrive from China.

The background. That difference in iteration speed changes everything in hardware engineering.
It stops being a problem of talent, because Chinese and American engineers are equally capable, and becomes a problem of infrastructure. Breaking a robot, learning, replacing it, and trying again: that is what builds cumulative technical advantage. If breaking a robot costs three weeks of logistics, learning stalls and cycles stretch out.

Yes, but. China does have state support, and it is entirely legitimate to point this out. The government has injected a lot of money into the sector and has set production targets. But Silicon Valley is hardly an impoverished region: it has more capital, investors with more experience and resources, and more decades of practice financing high-risk bets. If this were a war over who has the fattest checkbook, the United States would win handily. But it is not. Furthermore, Chinese state money comes with strings attached: it is classified as a “state asset,” and founders assume personal liability if the company fails. That pushes capital toward politically safe bets, not necessarily toward the most innovative ones.

The question. Can the West make up ground in robotics? Yes, but not the way it is trying. Attracting foreign talent helps at the margin, but it does not solve the underlying problem. Catching up means building local supply chains capable of delivering a spare part in two days, not two weeks. And that is not an immigration or R&D problem. It is an industrial-base problem, and solving it takes many years of work; thankless work, whose fruits may be reaped by those who arrive later. Until then we are going to see many more viral videos of Chinese robots doing pirouettes with increasing naturalness. And that is because they have built the best environment in the world for breaking things and trying again. In engineering, that explains almost everything.

Featured image | CCTV
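The price gap cited above translates directly into iteration capacity, and a quick back-of-the-envelope calculation makes it concrete. The unit prices are the article's figures; the budget is an invented, purely illustrative number:

```python
# Illustrative back-of-the-envelope: how many prototypes can one budget
# buy to break and rebuild? Prices are from the article; the budget is invented.
g1_price = 13_500        # Unitree G1
optimus_price = 20_000   # Tesla Optimus, lower bound cited in the article
budget = 135_000         # hypothetical hardware R&D budget

g1_iterations = budget // g1_price            # robots you can afford to destroy
optimus_iterations = budget // optimus_price
print(g1_iterations, optimus_iterations)  # 10 6
```

On the same hypothetical budget, the cheaper platform buys ten break-and-learn cycles against six, before counting the weeks of shipping logistics the article describes.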

China brought humanoid robots to the country’s biggest television show: it made them practice kung-fu with millimeter precision

Every year, hundreds of millions of people in China sit in front of the television to watch the Spring Festival Gala, recognized by Guinness World Records as the most watched annual program on the planet. It is not only a music and dance show, but also a showcase where the country decides what image it wants to project of itself. On a stage of maximum visibility, the presence of humanoid robots ceases to be a simple technological curiosity and begins to function as a public declaration about the place that innovation occupies in the national narrative. What happened there was not just an artistic number, but a clear clue as to where the Asian giant is looking when it thinks about its technological future.

Kung fu, choreography and coordination. To present their robots to millions of spectators, the organizers turned to a deeply recognizable symbol: martial arts. In the CCTV broadcast, available on YouTube, we can see robots using traditional weapons such as swords and nunchucks, as well as doing tricks and jumping from trampolines, always in sequences shared with human performers. The choice of kung fu provided more than visual spectacle; it can also be read as a familiar way of framing technological advancement within a tradition known to the public.

The magnitude of the event. The Spring Festival Gala has been broadcast since 1983 and is an inseparable part of the New Year celebration in hundreds of millions of homes. Reuters describes it as an event comparable, in media scale, to the American Super Bowl, capable of concentrating popular culture, political messaging and industrial ambition in a single night. What appears on that stage entertains and, at the same time, projects a message and signals priorities.

A gateway for the industry. Behind the staging there were specific names and a visible strategy.
Companies well known in the West, such as Unitree, participated in the gala, alongside lesser-known names such as MagicLab, Galbot and Noetix. The immediate precedent helps explain the moment: Unitree’s robot performance in the previous edition went viral and, in a way, brought this technology closer to the general public, so the idea of betting on a similar show again is reasonable.

From the stage to the factory. The public display of these systems fits a line of industrial policy that places robotics and AI at the center of the next stage of Chinese manufacturing. In recent years we have seen the Asian giant invest heavily in this sector. According to Omdia, China accounted for around 90% of the nearly 13,000 humanoid robots shipped worldwide last year, a global shipment figure that does not go unnoticed. Morgan Stanley also projects that Chinese sales could exceed 28,000 units this year, which would point to a notable expansion phase.

In the end, what was seen on that stage went beyond well-executed choreography. Behind each movement appeared a national narrative that combines technological ambition, industrial policy and cultural projection in a single television image. The question is no longer whether these robots can perform in front of millions of people, but how much their presence will grow in the coming years and into which spaces of daily life they will end up integrating. For now, their mass presence is confined to this type of spectacle.
Images | CCTV

The news “China brought humanoid robots to the country’s biggest television show: it made them practice kung-fu with millimeter precision” was originally published in Xataka by Javier Márquez.

Xiaomi already has its own AI model for robots. At the moment, it’s great at taking apart LEGOs and folding towels.

It has been a long time since Xiaomi stopped being just a mobile company. Today the company’s tentacles reach all kinds of sectors, from phones and household appliances to cars, chip design and, from now on, robotics. The Chinese company has just presented its first vision, language and action model for robotics. Its name: Xiaomi-Robotics-0.

What is this about. Xiaomi-Robotics-0 is an open-source model whose code can be found on GitHub and Hugging Face. As the company explains, the model has been optimized to offer “high performance, speed and smoothness in real-time executions.” We should not think of it as an AI capable of making a robot run and jump like a human, but rather as one capable of making a “simple” robot understand its surroundings and make the optimal decision without, for example, destroying whatever it has in its hands.

About the robots. When we talk about AI applied to robotics, we are not just talking about a robot being able to move. The device must know and understand that it should not apply the same force when holding a brick as when holding a cat, for example. There has to be visual understanding, comprehension of what is being seen, and an appropriate execution of actions: this is a brick > it is a heavy object > I must apply more force to hold it and move it from one side to the other.

Xiaomi-Robotics-0 results in the benchmarks | Image: Xiaomi

The benchmarks. As detailed on the project website, Xiaomi has achieved very good results on the LIBERO (knowledge transfer), SimplerEnv (performance in realistic simulations) and CALVIN (language-conditioned tasks) benchmarks. According to the company, Xiaomi-Robotics-0 “achieves high success rates and robust results in two challenging two-handed tasks: disassembling LEGOs and folding towels.”

The fun of training. Every AI model draws from a training dataset.
In the case of Xiaomi-Robotics-0, a 4.7-billion-parameter model, the dataset consists of 200 million time steps of robot trajectories and more than 80 million samples of general vision-language data, including 338 hours of LEGO disassembly videos and 400 hours of towel-folding videos.

The results. The company claims in the paper that its model can disassemble complex LEGO builds of up to 20 pieces, adapting its grip in real time to avoid errors, and can use a single hand to position a towel correctly and fold it; if it picks up two towels from the basket at once, it sets one down and folds only the other. This demonstrates an interesting capacity for adaptation and learning that, while it may seem trivial on paper, has real value if we think about industrial and even domestic robots.

Beyond. What this model demonstrates is the ability to adapt to complex, unpredictable geometries, such as that of a towel tossed in a basket, and to understand what we might call “soft physics.” On a towel that may seem like a small thing, but think about manipulating human tissue during a surgical intervention, for example. The same goes for LEGOs: it is not just disassembling them, it is understanding the position of the blocks, how they fit together, and what force to apply at what angle so as not to break them. Or think about a robot clearing debris. An industrial robot has historically been programmed with fixed coordinates, that is, moving something from point A to point B. A robot with AI like the one Xiaomi proposes would be far more versatile: the first robot learns movements, the second learns tasks, and the difference is a world apart. In a distant future with domestic robots, a robot that dusts a shelf blindly is not the same as one that can identify objects and decorations and understands that it must move them, rather than knock them off, in order to clean thoroughly.
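The distinction between a robot that learns movements and one that learns tasks can be sketched in a few lines of Python. This is a toy illustration under invented names and numbers, not Xiaomi's actual API: a fixed-coordinate program replays waypoints blindly, while a task-level policy inspects the observation before choosing its action (here, a grip force scaled by the object's estimated mass, echoing the brick-versus-cat example above).

```python
# Toy contrast between classic industrial control and a task-level policy.
# Nothing here is Xiaomi's actual API; all names and numbers are illustrative.

def fixed_coordinate_program(start, end):
    """Classic automation: replay hard-coded waypoints, blind to the scene."""
    return [start, end]

class TaskLevelPolicy:
    """VLA-style idea: (observation, instruction) -> action.
    A real model fuses camera images and language; this stub simply
    uses the object's estimated mass to choose a grip force."""
    def act(self, observation, instruction):
        mass_kg = observation["mass_kg"]
        grip_force = min(5.0 + 2.0 * mass_kg, 40.0)  # clamp to a safe maximum
        return {"grip_force_N": grip_force, "target": observation["position"]}

policy = TaskLevelPolicy()
brick = {"mass_kg": 2.5, "position": (0.3, 0.1)}
towel = {"mass_kg": 0.2, "position": (0.5, 0.2)}
print(policy.act(brick, "move the brick")["grip_force_N"])  # 10.0
print(policy.act(towel, "fold the towel")["grip_force_N"])  # ~5.4
```

The fixed program fails silently if the object moved; the policy recomputes its action from what it currently perceives, which is the adaptability the article describes.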
Cover image | Xiaomi

China surrounded an enclave with robots. Now they have been given a rifle to shoot at 100 meters, and the result points to an island

China has been leaving increasingly explicit clues about how it imagines future conflicts. In 2025, PLA maneuvers included assault exercises on minor islands using ground robots and unmanned systems, a sign that Beijing is no longer rehearsing only classic amphibious crossings, but scenarios where machines clear the way before soldiers. Those practices marked a clear direction: combat automation was no longer a distant theory, but something China was beginning to test on the ground. Now it has amplified them.

Drones with a rifle. China has made a qualitative leap in the use of combat drones by demonstrating that a UAV armed with a standard assault rifle can land 100% of its shots on a human-sized target at 100 meters while remaining in hover. The system, developed by a Chinese company together with the PLA special operations academy, fired 20 times and placed half of the hits within a radius comparable to a head shot, a result that makes it clear these are no longer experimental platforms but precision weapons ready for real environments.

Extra ball. This does not look like a one-off experiment or a laboratory demonstration: the team itself has explained that the only “imperfect” shot was due to defective ammunition, not the system, making this test an unmistakable sign of where Chinese combat power is headed.

Taiwan and a problem. This progress cannot be understood without the Taiwan backdrop: one of the most urbanized territories on the planet, where any military operation would require fighting in dense megacities full of civilians, underground infrastructure and narrow streets that neutralize many traditional advantages. For the PLA, the challenge is not just to cross the sea but to dominate neighborhoods, subway stations and residential complexes where human infantry suffers enormous political and military costs.
The Chinese response to this dilemma is neither doctrinal nor moral, but technical: treat urban warfare as an engineering problem that can be solved by delegating violence to machines capable of moving, identifying targets and shooting without fatigue or fear.

A bet. In fact, The Diplomat recalled that the armed-drone test fits into the third major phase of Chinese military modernization, the so-called “intelligentization,” which seeks to replace human decisions with distributed artificial intelligence systems. Having mechanized and digitized its forces, the PLA now aims to delegate key functions (detection, prioritization and attack) to algorithms that operate faster than any human chain of command. In this framework, a drone with a rifle is not a curiosity, but an elemental piece of an ecosystem where sensors, weapons and software act in a coordinated manner, reducing the soldier’s role to that of a mere initial authorizer or, in the extreme, eliminating him from decision-making altogether.

Swarms in alleys. There is much more, because the outlet highlighted documents and studies linked to Chinese military universities revealing that the goal is not individual drones, but autonomous swarms specifically designed for urban warfare. These systems are meant to operate at low altitude, inside buildings, indoors and underground, even when communications are degraded or nonexistent. Through simple rules and self-organization, swarms could patrol areas, track people and execute attacks without receiving orders in real time, a solution the PLA considers ideal for neutralizing defenses in cities such as Taipei or Kaohsiung and for eliminating key targets before external forces can intervene.

The gray area of legality. The technological bet is accompanied by a deliberately ambiguous legal position by Beijing on lethal autonomous weapons. How?
By defining as unacceptable only those systems that simultaneously meet a series of very strict criteria, China leaves itself a wide margin to develop weapons capable of killing without direct human supervision, as long as they can in theory be stopped or follow pre-programmed rules. This ambiguity, they say, contrasts with the documented risks of AI in combat (identification errors, inability to interpret human intentions, data biases) and makes it easier for research to advance without clear regulatory brakes.

The future being tested today. In short, the drone that shoots with surgical precision at 100 meters is not an anecdote, but tangible proof of where Beijing’s strategy is headed: move the war into the heart of the city and delegate it to machines. If this model is applied in a conflict such as one over Taiwan, the combination of autonomous swarms, integrated light weapons and decisions without human intervention could multiply the risk for civilians and lower the political thresholds for the use of force. Seen through that prism, what is presented today as a technical experiment is, in reality, a deeply disturbing preview: that of an urban war where the alleys are no longer patrolled by soldiers, but by armed robots that will never ask questions.

Image | Heeheemalu
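The "simple rules plus self-organization" idea those studies describe is a standard concept in swarm robotics, and a toy model shows how group-level behavior can emerge without any central orders. The sketch below is purely illustrative and unrelated to any real system: agents on a line each look only at their immediate neighbours, yet the group ends up evenly covering a corridor.

```python
# Toy self-organization: agents spread evenly along a line using only
# local information. Purely illustrative; not any real swarm system.

def step(positions):
    """Each interior agent drifts toward the midpoint of its two
    neighbours; the two end agents hold the corridor's endpoints."""
    new = positions[:]
    for i in range(1, len(positions) - 1):
        midpoint = (positions[i - 1] + positions[i + 1]) / 2
        new[i] = positions[i] + 0.5 * (midpoint - positions[i])
    return new

# four agents start bunched near one end of a 30 m corridor
pos = [0.0, 1.0, 2.0, 30.0]
for _ in range(100):
    pos = step(pos)
print([round(p, 2) for p in pos])  # [0.0, 10.0, 20.0, 30.0]
```

No agent knows the global formation or receives commands; even coverage emerges from a purely local rule, which is precisely what makes rule-based swarms robust to degraded communications.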

There is a Chinese startup creating the most amazing robots of the moment. It’s called X Square

The only embodied AI company backed by all three Chinese technology giants (ByteDance, Meituan and Alibaba). Just over two years of life, and financing rounds in which it has managed to raise more than $400 million. These are some of the calling cards of X Square Robot, one of the most promising companies in the field of robotics.

Where does it come from. X Square is a Chinese startup born in 2023 at the hands of Wang Qian, an engineer with a doctorate from the University of Southern California who, in recent years, has kept a discreet profile in the industry. The company was created not only to build humanoid robots: it is also behind the development of the models needed to lead in robotics.

The roadmap. The startup, despite its youth, has made the most of its two years of life.
December 2023: full financing and start of operations.
March 2024: work begins on a general large-scale model for embodied AI, the brain that would move its robots.
May 2025: commercialization of Quanta X1, a bimanual wheeled robot equipped with its WALL-A model, specially designed for logistics and commercial tasks.
July 2025: first to show a general-purpose AI model capable of directly controlling a highly dexterous robotic hand. Unlike traditional approaches (based on rules, fixed trajectories or action-specific training), the system uses a single model that integrates perception, planning and control, allowing grip and movement to adapt in real time to changes in the environment.
August 2025: Quanta X2 arrives, its first humanoid robot, also on a wheeled base.

The product. Quanta X2 is the latest solution from X Square, a wheeled humanoid robot that integrates the company’s own AI model. This model gives the robot a vision system, autonomous motion control, real-time task planning and more.
We highly recommend watching the demo video in which X Square shows it in operation, because it is spectacular.

Why it is important. X Square does not sell ordinary humanoid robots; it sells cognitive capacity. The norm among robotics companies is to design the hardware and adapt existing software to it. X Square designs its own models focused on physical AI. This is fundamental for its home country, China: the country wants to accelerate its automobile industry toward 100% automated factories by 2030, the aid policy is especially favorable to local companies developing robotics solutions, and China has created centers dedicated to training robots to imitate human behavior. X Square’s software is key to all of this.

The backup. X Square is backed by giants like Alibaba and ByteDance; Alibaba has announced an internal team dedicated to robotics built on Qwen, its AI model division. Despite Alibaba’s muscle when it comes to creating its own language models, its investment of more than $140 million in X Square Robot makes it clear that this is much more than a typical startup.

Image | XSquare

Hyundai imagines factories full of humanoid robots. A Korean union has said ‘not so fast’

Hyundai has been building a very specific story for months about the future of its factories, one in which humanoid robots go from distant promise to real industrial tool. The image is powerful and connects with a global race to automate increasingly complex processes, but in South Korea that discourse has already found its first limit. Even before robots enter production lines, the union has come forward to make its position clear and warn that any changes that impact employment will have to be negotiated.

A clear warning. The Hyundai Motor union has made it clear that “without an agreement between the company and workers, not a single robot can enter South Korean plants,” stressing that any decision with an impact on employment must go through the negotiating table. The message connects directly with the current collective agreement, which requires all measures affecting work to be subject to debate and joint approval. With this positioning, the introduction of humanoids is emerging as a possible source of friction between worker representatives and the company.

Fear that South Korea will lose prominence. The union links automation to a broader movement of industrial reorganization, marked by the growth of manufacturing in the United States. As it explains, the planned increase in capacity at the US plant could end up subtracting volume from factories in South Korea, and it maintains that two centers are already suffering from a lack of workload. In this context, humanoids are interpreted not only as a technological tool, but as an element that could accelerate job cuts if it is not accompanied by clear guarantees on the maintenance of employment.

The starting point of the discussion. This comes after Hyundai introduced Atlas, the humanoid robot developed by Boston Dynamics, as a key piece of its medium-term industrial strategy.
The firm assured that it plans to progressively integrate it into its global network of factories starting in 2028. It also explained that these robots are designed to take on general industrial tasks and work alongside people, with the aim of reducing physical effort and taking over potentially dangerous jobs. It avoided specifying, however, how many units it will deploy in the first phase or how much the project will cost.

First in the United States. The manufacturer has already begun to sketch how it wants to industrialize this bet. The group has explained that it will build a dedicated plant in the United States for the production of robots, a factory devoted to producing Atlas at scale in the coming years. The first operational destination would be the Georgia plant, known as HMGMA, where humanoids would initially be used for very specific tasks, such as classifying and sequencing parts for the assembly line.

The broader race. Hyundai’s commitment is part of a much wider push to bring humanoid robots to industry. Companies such as Tesla, Amazon and the Chinese manufacturer BYD have announced similar plans, although with different degrees of maturity. Some projects have already gone from demonstration to real work, such as the Figure 01 robot at a BMW plant, where it performs support tasks autonomously. These are still limited, highly supervised experiences, but enough to show that the leap from the laboratory to the factory has already begun.

Images | Hyundai

Drones converted into the Uber of combat robots

In Ukraine, the war is transforming at brutal speed due to the massive irruption of drones and robots, machines and devices that have gone from being a complement to being a central part of the fight. Every week new ways of using them appear: to reconnoitre, attack, evacuate or move supplies without exposing soldiers, and that is forcing tactics to adapt almost in real time. What we did not imagine was to what extent.

Crossing a line. In Ukraine, this “machine war” has entered a phase as delusional as it is logical, one in which a drone is no longer just a weapon or an eye in the sky, but a means of transportation: Ukrainian soldiers have started using aerial drones as improvised Ubers for combat robots, loading small ground vehicles and dropping them near Russian positions to save time and, above all, blood. The scene was described by military commanders to the outlet Insider: soldiers at the front watched with stupefaction an almost absurd sight (a flying platform carrying another armed platform) that nonetheless sums up the technological moment of the front better than anything, the continuous impossible combinations born from a simple and brutal need, to put capabilities on the ground without exposing a human one second more than essential.

The trick. Here enters a company we have talked about before: Ark Robotics, which supplies autonomous robots to more than 20 brigades. It says this tactic surprised even its own CEO, Achi, who speaks to Insider under a pseudonym for safety; when he saw it he reacted with a mixture of disbelief and alarm, before admitting that it made all the sense in the world. A large drone carries a small ground robot forward and “drops” it directly where it matters, avoiding the most vulnerable part of the trip: that slow advance over land that exposes the UGV to mines, direct fire, mud, craters and detection.
The idea is so simple it is scary: it is not about inventing a marvel, but about skipping the leg of the route that produces casualties, turning deployment into something fast and safe for the human operator.

Why it makes sense. The reason this madness works is that air and ground systems complement each other in modern warfare. Aerial drones are numerous, can cover distances quickly and cross dangerous areas more easily, but they are noisy, visible and need to stay close to observe or attack. Ground robots, on the other hand, are slow to arrive, but once in position they can do things that cannot be done the same way from the air: get into trenches, enter shelters, approach without announcing their presence, place explosives, collect intelligence, shoot with more stability and remain hidden next to an enemy position as if they were part of the landscape. That drone-Uber of sorts solves precisely the bottleneck: it does not improve the robot itself, it improves how you get it to the place where it starts to be really dangerous.

Ukrainian ground robot

Crazy innovation… with logic. This kind of hybrid shows to what extent the war in Ukraine has become a laboratory that no longer distinguishes between classic categories, because everything gets mixed in order to gain seconds and reduce casualties. It is not just creativity: it is creativity for survival, squeezing any tool until you get uses out of it that were never in the plans. Other manufacturers, such as Milrem Robotics, have also acknowledged that the Ukrainians have used their robots in unexpected ways, and that pressure from the front is rewriting the design of systems in real time, in cycles of change so rapid that they would seem impossible in traditional industry.

The cost of speed.
The problem for companies like Ark is that this “insane phase” of machine warfare forces them to innovate at a speed that can turn against them: if you change too much, you can no longer mass-produce, and if you produce without changing, you fall behind. Achi describes an almost inhuman pace of iteration, with multiple modifications in weeks, and the permanent risk of following wrong trends that compromise reliability and volume. In practice, the war requires them to do two incompatible things at once: experiment like an improvised workshop and manufacture like a real industry.

The future that looms. Although ground robots are still a minority next to the torrent of aerial drones, the scene with the Ark robots makes it clear that this is an expanding sector and that the front is pushing toward a model in which the front line is increasingly supported by machines. The company is developing a system called Frontier to coordinate thousands of drones and robots with minimal human intervention, and the idea that floats above it all is as disturbing as it is coherent: if moving people near the front is increasingly absurd, war will tend to move machines, and Ukraine is exploiting that logic in a big way.

Image | Ministry of Defense of Ukraine

These are not simple collector’s figurines. They are impressive robots worth more than 1,600 euros, and I have tried them

I’m going to tell you an anecdote. A few weeks ago I received a press release in my inbox, one of the hundreds and hundreds that arrive every day. This one said something like “Robosen arrives in Spain with the Soundwave G1” and, of course, my geeky mind clicked. Soundwave is the right hand of Megatron, the villain of ‘Transformers’, and I thought, “well, another company that makes collector’s figurines.” The thing is, I went to their website and completely lost my mind. I was expecting to find a figure like some I have from ‘One Piece’ and the like, but no. What I saw were animatronics costing more than 1,600 euros that take your breath away. Not just because of the level of detail, but because of how they move, talk and interact. Then I looked at the spec sheet, saw what they have inside, and thought, “why not?”

Spoiler: it is not just a figure | Image: Xataka

So I wrote to Robosen to ask for the Soundwave, try it out and tell Xataka how it is. Hopes? Zero. Not because of them, but because I understand that shipping a 1,600-euro robot by courier in the middle of Christmas is not something anyone is eager to do. Well, they sent not one, not two, but three. Robosen gave me not only Soundwave, but also Megatron and a limited-edition Buzz Lightyear. Like a little kid, I spent Christmas trying them out and showing them to every living being that entered the house. Because yes, they are one of those products you don’t get the chance to try every day, capable of making you say “wow!”. I’ll tell you how they are.

There are figures and FIGURES

The Buzz box is a resounding yes | Image: Xataka

The animatronics come packaged with exquisite taste. A product of this type and price must convey a premium feel right out of the box, and Robosen has certainly nailed it. The Buzz Lightyear box deserves special mention: it is the same as the original ‘Toy Story’ box.
What I want to get at is that everything comes very well packaged, with rigid foam and closures to prevent damage during transport. No complaints.

“I am Megatron, leader of the Decepticons!”

Megatron | Image: Xataka

I think the easiest approach is to show the robots one by one, so let’s start with the most impressive: Megatron. This beast, priced at 1,612.33 euros at the time of writing, is the one that convinced me to request the robots and try them out. Like Soundwave, it is inspired by the design of the 80s cartoons, and the attention to detail is spectacular.

Megatron in his tank form | Image: Xataka

Megatron, as every Transformers fan will know, takes the form of a tank, and here it could be nothing less. The animatronic features 112 LED lights with red and purple accents, wheels inspired by real tanks, and a shiny metallic finish. That’s the outside; inside it has 118 microchips and 36 high-precision servomotors that make it not only move, but also transform. I think a video does it more justice than photos.

In both tank mode and robot mode we can interact with the device, either by voice or through the mobile app. These are the only three "buts" I can raise, not just about Megatron but about all the robots I tried:

- Voice commands only work in English and are predefined. If you don’t speak English, or your pronunciation is so-so, the robots quite likely won’t understand the command.
- Each robot has its own app. It is not like the Xiaomi app, where you can connect all the company’s devices: Megatron has its app, Buzz Lightyear has his, and so on.
- You cannot browse the app unless the robot is connected.

For the rest, setting those aside, interacting with the robot is great.
You can move it, talk to it to make it say phrases, start little skits like the one you can see below these lines and, if someone has the patience and, above all, the artistic skill I lack, program actions and share them with the rest of the world. I had to lower it to the floor because it moves around a lot during the scene and was reaching the edge of the platform.

The robot’s movement is extremely precise. The motors make some noise, but nothing out of the ordinary. The voice comes through loud and clear and, when the movement is fast, the effect is very, very convincing. By the way, programming can be done either on a PC or by putting the robot into a position and recording the posture in the app. That is easy to do.

Natural movement, that is, walking forward or backward, is brutal. Slow, but very, very precise and stable. The first time you move it, it seems like real magic. That a robot this size stays this stable while standing and moving is impressive. In tank mode, movement is much more fluid.

Those nostalgic for ‘Transformers’ will surely be glad to know that the phrases have not been generated with AI but recorded by the original voice actor, Frank Welker, who also voiced Soundwave. There are more than 270 original voice lines, and in general you can interact with the robot through more than 50 voice commands.

Megatron | Image: Xataka

Something that has me completely fascinated is that the robot feels alive. If you leave it on standby you will see it "breathe" and make small gestures and movements to give a more lifelike impression. This happens regardless of whether it is in tank mode or robot mode, and yes, in case you were wondering, yes, …

We are entering an era in which robots with AI are becoming increasingly popular. LG already has its own to help us with household tasks

LG Electronics has officially announced CLOiD, its first multitasking home robot powered by artificial intelligence, which is being presented to the public for the first time at CES 2026 in Las Vegas. The goal is for this robot to automate a good part of household tasks, going beyond the basic cleaning functions current robots are limited to. Below we tell you all the details.

LG’s first robot for domestic work

According to LG, CLOiD is capable of performing complex tasks like getting milk from the refrigerator, putting a croissant in the oven for breakfast, and even taking care of the laundry: from starting wash cycles to folding and stacking clothes once they’re dry. The company is demonstrating these capabilities in various domestic scenarios during the technology fair.

The robot has two articulated arms with seven degrees of freedom each. The shoulders, elbows and wrists allow forward, backward, rotational and lateral movements, while each hand includes five independently moving fingers to manipulate objects with precision. The torso can tilt to adjust its height, allowing it to pick up objects from knee height upwards, although not off the ground.

An intelligent “head” as a control center

The top unit of CLOiD functions as a mobile smart-home control center. It is equipped with a chip that acts as the robot’s brain, a screen, a speaker, cameras, various sensors and voice-driven generative artificial intelligence. These components allow the robot to communicate with people using spoken language and "facial expressions" on its screen, learn users’ living patterns, and control connected home appliances. Integration with LG’s ThinQ and ThinQ ON ecosystem allows CLOiD to work more fully with the South Korean brand’s products, essentially acting as a hands-on smart home hub.
Physical AI technology: VLM and VLA

At the core of CLOiD is what LG calls physical AI technology, which combines two models: a Vision Language Model (VLM), which converts images and video into structured language-based understanding, and a Vision Language Action (VLA) model, which translates visual and verbal inputs into physical actions. According to the company, these models have been trained on tens of thousands of hours of household-task data, allowing the robot to recognize appliances, interpret the user’s intent and execute appropriate actions.

The wheeled base uses autonomous driving technology derived from LG’s experience with robot vacuum cleaners and its Q9 model. According to the company, this configuration was chosen for its stability, safety and cost-effectiveness, with a low center of gravity that reduces the risk of tipping over if a child or pet bumps into it.

One more step in LG’s home robotics

CLOiD isn’t the only clothes-folding robot on show at CES this year: SwitchBot is also exhibiting its Onero H1 with similar capabilities. However, everything indicates that, for the moment, LG regards CLOiD more as a concept than as a product it will actually sell in the short term. The company says it will continue to develop home robots with practical functions and shapes for household tasks, and to expand the application of its robotic technology to conventional home appliances. The ultimate goal, according to Steve Baek, president of LG’s home appliance solutions division, is to achieve its vision of a "Zero Labor Home": "making housework a thing of the past so customers can spend more time on the things that really matter."

Autonomous robots with generative artificial intelligence are beginning to conquer technology fairs. Fairs are the perfect setting to attract the masses, so it remains to be seen whether these robots end up convincing enough people that, in a few years, we see them hogging store shelves.
Among other factors, price will decide whether the move really pays off.

Images | LG
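The two-stage split LG describes, a VLM that turns pixels and speech into structured language and a VLA that turns that structure into physical actions, can be sketched in miniature. Everything below is an illustrative assumption, not LG’s actual software: the function names, the hard-coded scene and the action strings are invented.

```python
from dataclasses import dataclass

# Hypothetical two-stage "physical AI" pipeline, loosely following the
# VLM -> VLA split described in the article. All names and logic here are
# invented for illustration; they are not LG's implementation.

@dataclass
class SceneDescription:
    objects: list[str]   # what the vision-language model recognized
    user_intent: str     # intent parsed from the spoken request

def vlm_understand(image_frames: list[bytes], utterance: str) -> SceneDescription:
    """Stage 1 (VLM): turn pixels + speech into structured language."""
    # A real model would run perception here; we hard-code a toy result.
    return SceneDescription(objects=["refrigerator", "milk carton"],
                            user_intent="fetch milk")

def vla_plan(scene: SceneDescription) -> list[str]:
    """Stage 2 (VLA): map the structured description to physical actions."""
    if scene.user_intent == "fetch milk" and "refrigerator" in scene.objects:
        return ["navigate(refrigerator)", "open_door()",
                "grasp(milk carton)", "navigate(user)", "hand_over()"]
    return ["ask_for_clarification()"]

actions = vla_plan(vlm_understand([], "bring me the milk"))
```

The point of the split is that the hard perception problem and the hard control problem are handled by separate models that meet at a structured intermediate representation, which is roughly what LG’s description of VLM plus VLA implies.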

There are already autonomous robots smaller than a grain of salt

Robotics has pursued the same obsession for decades: shrinking machines without emptying them of intelligence. Until now, that goal had a physical limit that was difficult to cross: below a certain threshold, making a smaller robot meant making several compromises. That has just changed. A team of researchers has shown that it is possible to build an autonomous robot so tiny it can barely be seen, yet still capable of perceiving its environment, processing information, and responding without outside intervention.

The development comes from researchers at the University of Pennsylvania and the University of Michigan, who have built what the team describes as the smallest autonomous programmable robot achieved so far. The device is designed to operate submerged in a fluid, and in that environment it can move and function. The scientific article describes a body measuring approximately 210 by 340 micrometers and 50 micrometers thick. Its scale is so small that it can rest on the ridge of a fingerprint and is almost invisible to the naked eye.

A complete robot on a microscopic scale. The difference compared with previous attempts lies not only in the miniaturization but in what this device manages to integrate. According to the researchers, the microrobot incorporates computing, memory, sensors, communication and locomotion systems within a single autonomous platform. Until now, such systems often relied on external equipment to process information or make decisions. In this case, the robot can execute digitally defined algorithms and modify its behavior based on what is happening around it.

The main obstacle to getting here has not been conceptual but physical. At micrometer scales, the rules change: gravity and inertia lose weight, and forces such as viscosity and drag dominate. In that environment, moving through a fluid is more like pushing through thick material than swimming in water.
Added to this difficulty is an even more severe restriction: energy. With power budgets of around 100 nanowatts, integrating propulsion and computing at the same time had been, until now, an almost impossible compromise.

Electronics designed to survive on almost no power. The solution involved rethinking the robot’s electronic architecture from scratch. The team worked with a 55-nanometer CMOS process and used subthreshold digital logic to keep consumption within a budget of around 100 nanowatts. In that space they managed to integrate photovoltaic cells for power, temperature sensors, control circuits for the actuators, an optical receiver for programming and communication, and a processor with memory.

Locomotion is one of the most distinctive aspects of the design. Instead of motors or appendages, the microrobot uses electric fields to induce currents in the surrounding fluid, moving without moving parts that could break. Its creators describe it as a system in which the robot generates its own "river" to move forward. The same minimalist logic extends to communication: the measurements the robot takes, such as temperature, are encoded into motion sequences, a simple but effective method at this scale.

Tiny robots that act together. Beyond individual behavior, the team has shown that these microrobots can synchronize and operate in groups. According to the researchers, several devices are capable of coordinating their movements and forming collective patterns comparable to schools of fish. This approach opens the door to distributed tasks, in which each unit contributes local information or action. In theory, these groups could keep operating autonomously for months if kept charged with LED light on their solar cells, although for now the available memory limits the complexity of programmable behaviors.

With this platform, the researchers propose a path toward more general-purpose microrobots, capable of executing tasks in difficult environments without constant supervision.
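The motion-as-telemetry idea is simple enough to sketch: a robot with no radio serializes a reading into a short sequence of movement primitives, which an external observer decodes by watching. The 2-bits-per-move scheme below is an invented toy, not the encoding used in the actual paper.

```python
# Toy illustration of telemetry encoded as motion: a sensor reading (here,
# an 8-bit temperature value) becomes a sequence of movement primitives.
# The specific primitives and 2-bit mapping are assumptions for this sketch.

MOVES = ["forward", "left", "right", "spin"]  # 4 primitives = 2 bits each

def encode_temperature(temp_c: int) -> list[str]:
    """Encode an 8-bit value (0-255) as four movement primitives."""
    assert 0 <= temp_c <= 255
    return [MOVES[(temp_c >> shift) & 0b11] for shift in (6, 4, 2, 0)]

def decode_temperature(moves: list[str]) -> int:
    """Recover the reading by observing the robot's four moves."""
    value = 0
    for move in moves:
        value = (value << 2) | MOVES.index(move)
    return value

sequence = encode_temperature(37)
assert decode_temperature(sequence) == 37
```

With four distinguishable primitives, each move carries two bits, so an 8-bit reading costs four moves; that trade of bandwidth for hardware simplicity is what makes the approach attractive at a scale where a radio is out of the question.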
On the horizon are applications that today are closer to the laboratory than to the real world, for example in biomedicine, where devices of this type could operate in body fluids. The team itself insists that this is just a first step: the advance lays a technical foundation, but the jump to practical uses will depend on increasing performance.

Images | University of Pennsylvania and the University of Michigan
